00:00:00.001 Started by upstream project "autotest-per-patch" build number 131237 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.120 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.123 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.165 Fetching changes from the remote Git repository 00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.236 > git --version # 'git version 2.39.2' 00:00:00.236 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.900 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.910 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.921 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD) 00:00:07.921 > git config core.sparsecheckout # timeout=10 00:00:07.930 > git read-tree -mu HEAD # timeout=10 00:00:07.944 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5 00:00:07.960 Commit message: "packer: Bump java's version" 00:00:07.960 > git rev-list --no-walk 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10 00:00:08.041 [Pipeline] Start of Pipeline 00:00:08.051 [Pipeline] library 00:00:08.052 Loading library shm_lib@master 00:00:08.053 Library shm_lib@master is cached. Copying from home. 00:00:08.071 [Pipeline] node 00:00:08.080 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.081 [Pipeline] { 00:00:08.090 [Pipeline] catchError 00:00:08.091 [Pipeline] { 00:00:08.100 [Pipeline] wrap 00:00:08.107 [Pipeline] { 00:00:08.112 [Pipeline] stage 00:00:08.114 [Pipeline] { (Prologue) 00:00:08.128 [Pipeline] echo 00:00:08.129 Node: VM-host-SM16 00:00:08.135 [Pipeline] cleanWs 00:00:08.144 [WS-CLEANUP] Deleting project workspace... 00:00:08.144 [WS-CLEANUP] Deferred wipeout is used... 
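The checkout at the top of the log is performed by the Jenkins git plugin. Reproduced by hand it is roughly the sequence below (a sketch only: the plugin injects credentials via GIT_ASKPASS, routes through the proxy-dmz.intel.com proxy shown above, and applies per-command timeouts, none of which are repeated here).

# Rough manual equivalent of the jbp checkout done by the git plugin above.
# Assumes anonymous access to the gerrit mirror, which the real job does not use.
git init jbp && cd jbp
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --progress --depth=1 origin refs/heads/master
git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c   # "packer: Bump java's version"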
00:00:08.148 [WS-CLEANUP] done 00:00:08.326 [Pipeline] setCustomBuildProperty 00:00:08.416 [Pipeline] httpRequest 00:00:09.242 [Pipeline] echo 00:00:09.245 Sorcerer 10.211.164.101 is alive 00:00:09.254 [Pipeline] retry 00:00:09.257 [Pipeline] { 00:00:09.272 [Pipeline] httpRequest 00:00:09.276 HttpMethod: GET 00:00:09.276 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:09.277 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:09.293 Response Code: HTTP/1.1 200 OK 00:00:09.294 Success: Status code 200 is in the accepted range: 200,404 00:00:09.294 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:32.787 [Pipeline] } 00:00:32.799 [Pipeline] // retry 00:00:32.808 [Pipeline] sh 00:00:33.087 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:33.100 [Pipeline] httpRequest 00:00:33.518 [Pipeline] echo 00:00:33.520 Sorcerer 10.211.164.101 is alive 00:00:33.531 [Pipeline] retry 00:00:33.533 [Pipeline] { 00:00:33.548 [Pipeline] httpRequest 00:00:33.553 HttpMethod: GET 00:00:33.554 URL: http://10.211.164.101/packages/spdk_27a8e04f9d3c09b72ba8306ba4b8ae7ef3f1e0ae.tar.gz 00:00:33.554 Sending request to url: http://10.211.164.101/packages/spdk_27a8e04f9d3c09b72ba8306ba4b8ae7ef3f1e0ae.tar.gz 00:00:33.569 Response Code: HTTP/1.1 200 OK 00:00:33.570 Success: Status code 200 is in the accepted range: 200,404 00:00:33.571 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_27a8e04f9d3c09b72ba8306ba4b8ae7ef3f1e0ae.tar.gz 00:01:06.296 [Pipeline] } 00:01:06.314 [Pipeline] // retry 00:01:06.323 [Pipeline] sh 00:01:06.606 + tar --no-same-owner -xf spdk_27a8e04f9d3c09b72ba8306ba4b8ae7ef3f1e0ae.tar.gz 00:01:09.903 [Pipeline] sh 00:01:10.188 + git -C spdk log --oneline -n5 00:01:10.188 27a8e04f9 lib/nvme: pre alloc the dma buffer for data-set-management 00:01:10.188 5a8c76d99 lib/nvmf: Add spdk_nvmf_send_discovery_log_notice API 00:01:10.188 a70c3a90b bdev/lvol: add allocated clusters num in bdev_lvol_get_lvols 00:01:10.188 c26697bf5 bdev_ut: Comparison operator and tests fixes 00:01:10.188 75a12cbf9 test: Comparison operator fixes 00:01:10.209 [Pipeline] writeFile 00:01:10.225 [Pipeline] sh 00:01:10.523 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:10.535 [Pipeline] sh 00:01:10.816 + cat autorun-spdk.conf 00:01:10.816 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.816 SPDK_TEST_NVMF=1 00:01:10.816 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.816 SPDK_TEST_URING=1 00:01:10.816 SPDK_TEST_USDT=1 00:01:10.816 SPDK_RUN_UBSAN=1 00:01:10.816 NET_TYPE=virt 00:01:10.816 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.823 RUN_NIGHTLY=0 00:01:10.825 [Pipeline] } 00:01:10.839 [Pipeline] // stage 00:01:10.856 [Pipeline] stage 00:01:10.858 [Pipeline] { (Run VM) 00:01:10.872 [Pipeline] sh 00:01:11.152 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:11.152 + echo 'Start stage prepare_nvme.sh' 00:01:11.152 Start stage prepare_nvme.sh 00:01:11.152 + [[ -n 2 ]] 00:01:11.152 + disk_prefix=ex2 00:01:11.152 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:11.152 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:11.152 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:11.152 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.152 ++ SPDK_TEST_NVMF=1 00:01:11.152 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:11.152 ++ SPDK_TEST_URING=1 00:01:11.152 ++ SPDK_TEST_USDT=1 00:01:11.152 ++ SPDK_RUN_UBSAN=1 00:01:11.152 ++ NET_TYPE=virt 00:01:11.152 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:11.152 ++ RUN_NIGHTLY=0 00:01:11.152 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:11.152 + nvme_files=() 00:01:11.152 + declare -A nvme_files 00:01:11.152 + backend_dir=/var/lib/libvirt/images/backends 00:01:11.152 + nvme_files['nvme.img']=5G 00:01:11.152 + nvme_files['nvme-cmb.img']=5G 00:01:11.152 + nvme_files['nvme-multi0.img']=4G 00:01:11.152 + nvme_files['nvme-multi1.img']=4G 00:01:11.152 + nvme_files['nvme-multi2.img']=4G 00:01:11.152 + nvme_files['nvme-openstack.img']=8G 00:01:11.152 + nvme_files['nvme-zns.img']=5G 00:01:11.152 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:11.152 + (( SPDK_TEST_FTL == 1 )) 00:01:11.152 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:11.152 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:11.152 + for nvme in "${!nvme_files[@]}" 00:01:11.152 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:11.152 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.152 + for nvme in "${!nvme_files[@]}" 00:01:11.152 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:11.152 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.152 + for nvme in "${!nvme_files[@]}" 00:01:11.152 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:11.152 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:11.152 + for nvme in "${!nvme_files[@]}" 00:01:11.152 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:11.152 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.152 + for nvme in "${!nvme_files[@]}" 00:01:11.152 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:11.152 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.152 + for nvme in "${!nvme_files[@]}" 00:01:11.152 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:11.152 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.152 + for nvme in "${!nvme_files[@]}" 00:01:11.152 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:11.720 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.720 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:11.720 + echo 'End stage prepare_nvme.sh' 00:01:11.720 End stage prepare_nvme.sh 00:01:11.732 [Pipeline] sh 00:01:12.013 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:12.013 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b 
/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:01:12.013 00:01:12.013 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:12.013 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:12.013 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:12.013 HELP=0 00:01:12.013 DRY_RUN=0 00:01:12.013 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:01:12.013 NVME_DISKS_TYPE=nvme,nvme, 00:01:12.013 NVME_AUTO_CREATE=0 00:01:12.013 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:01:12.013 NVME_CMB=,, 00:01:12.013 NVME_PMR=,, 00:01:12.013 NVME_ZNS=,, 00:01:12.013 NVME_MS=,, 00:01:12.013 NVME_FDP=,, 00:01:12.013 SPDK_VAGRANT_DISTRO=fedora39 00:01:12.013 SPDK_VAGRANT_VMCPU=10 00:01:12.013 SPDK_VAGRANT_VMRAM=12288 00:01:12.013 SPDK_VAGRANT_PROVIDER=libvirt 00:01:12.013 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:12.013 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:12.013 SPDK_OPENSTACK_NETWORK=0 00:01:12.013 VAGRANT_PACKAGE_BOX=0 00:01:12.013 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:12.013 FORCE_DISTRO=true 00:01:12.014 VAGRANT_BOX_VERSION= 00:01:12.014 EXTRA_VAGRANTFILES= 00:01:12.014 NIC_MODEL=e1000 00:01:12.014 00:01:12.014 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:12.014 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:15.297 Bringing machine 'default' up with 'libvirt' provider... 00:01:15.297 ==> default: Creating image (snapshot of base box volume). 00:01:15.557 ==> default: Creating domain with the following settings... 
00:01:15.557 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729070139_b7e40cb1486637f421f7 00:01:15.557 ==> default: -- Domain type: kvm 00:01:15.557 ==> default: -- Cpus: 10 00:01:15.557 ==> default: -- Feature: acpi 00:01:15.557 ==> default: -- Feature: apic 00:01:15.557 ==> default: -- Feature: pae 00:01:15.557 ==> default: -- Memory: 12288M 00:01:15.557 ==> default: -- Memory Backing: hugepages: 00:01:15.557 ==> default: -- Management MAC: 00:01:15.557 ==> default: -- Loader: 00:01:15.557 ==> default: -- Nvram: 00:01:15.557 ==> default: -- Base box: spdk/fedora39 00:01:15.557 ==> default: -- Storage pool: default 00:01:15.557 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729070139_b7e40cb1486637f421f7.img (20G) 00:01:15.557 ==> default: -- Volume Cache: default 00:01:15.557 ==> default: -- Kernel: 00:01:15.557 ==> default: -- Initrd: 00:01:15.557 ==> default: -- Graphics Type: vnc 00:01:15.557 ==> default: -- Graphics Port: -1 00:01:15.557 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.557 ==> default: -- Graphics Password: Not defined 00:01:15.557 ==> default: -- Video Type: cirrus 00:01:15.557 ==> default: -- Video VRAM: 9216 00:01:15.557 ==> default: -- Sound Type: 00:01:15.557 ==> default: -- Keymap: en-us 00:01:15.557 ==> default: -- TPM Path: 00:01:15.557 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.557 ==> default: -- Command line args: 00:01:15.557 ==> default: -> value=-device, 00:01:15.557 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:15.557 ==> default: -> value=-drive, 00:01:15.557 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:15.557 ==> default: -> value=-device, 00:01:15.557 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.557 ==> default: -> value=-device, 00:01:15.557 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:15.557 ==> default: -> value=-drive, 00:01:15.557 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:15.557 ==> default: -> value=-device, 00:01:15.557 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.557 ==> default: -> value=-drive, 00:01:15.557 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:15.557 ==> default: -> value=-device, 00:01:15.557 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.557 ==> default: -> value=-drive, 00:01:15.557 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:15.557 ==> default: -> value=-device, 00:01:15.557 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.557 ==> default: Creating shared folders metadata... 00:01:15.557 ==> default: Starting domain. 00:01:16.933 ==> default: Waiting for domain to get an IP address... 00:01:35.058 ==> default: Waiting for SSH to become available... 00:01:35.058 ==> default: Configuring and enabling network interfaces... 
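The -drive/-device pairs in the domain settings above attach two emulated NVMe controllers to the guest: nvme-0 (serial 12340, PCI addr 0x10) gets a single namespace backed by ex2-nvme.img, and nvme-1 (serial 12341, addr 0x11) gets three namespaces backed by the ex2-nvme-multi0/1/2.img files, all with 4096-byte blocks. Written out as a bare QEMU command line, the NVMe portion would look roughly like this (a sketch; vagrant-libvirt actually generates a libvirt domain XML, and the machine type, memory, boot disk and NICs are omitted here):

# Sketch of the NVMe portion of the generated QEMU command line (hypothetical
# standalone invocation; all non-NVMe options are left out).
qemu-system-x86_64 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

These are the devices that later appear inside the VM as nvme0 (one namespace) and nvme1 (nvme1n1-nvme1n3) in the setup.sh status output.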
00:01:38.343 default: SSH address: 192.168.121.164:22 00:01:38.343 default: SSH username: vagrant 00:01:38.343 default: SSH auth method: private key 00:01:40.246 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:48.362 ==> default: Mounting SSHFS shared folder... 00:01:49.741 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:49.741 ==> default: Checking Mount.. 00:01:51.122 ==> default: Folder Successfully Mounted! 00:01:51.122 ==> default: Running provisioner: file... 00:01:51.689 default: ~/.gitconfig => .gitconfig 00:01:52.256 00:01:52.256 SUCCESS! 00:01:52.256 00:01:52.256 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:52.256 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:52.256 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:52.256 00:01:52.265 [Pipeline] } 00:01:52.281 [Pipeline] // stage 00:01:52.291 [Pipeline] dir 00:01:52.291 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:52.293 [Pipeline] { 00:01:52.304 [Pipeline] catchError 00:01:52.306 [Pipeline] { 00:01:52.318 [Pipeline] sh 00:01:52.597 + vagrant ssh-config --host vagrant 00:01:52.597 + sed -ne /^Host/,$p 00:01:52.597 + tee ssh_conf 00:01:55.878 Host vagrant 00:01:55.878 HostName 192.168.121.164 00:01:55.878 User vagrant 00:01:55.878 Port 22 00:01:55.878 UserKnownHostsFile /dev/null 00:01:55.878 StrictHostKeyChecking no 00:01:55.878 PasswordAuthentication no 00:01:55.878 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:55.878 IdentitiesOnly yes 00:01:55.878 LogLevel FATAL 00:01:55.878 ForwardAgent yes 00:01:55.878 ForwardX11 yes 00:01:55.878 00:01:55.893 [Pipeline] withEnv 00:01:55.895 [Pipeline] { 00:01:55.909 [Pipeline] sh 00:01:56.251 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:56.251 source /etc/os-release 00:01:56.251 [[ -e /image.version ]] && img=$(< /image.version) 00:01:56.251 # Minimal, systemd-like check. 00:01:56.251 if [[ -e /.dockerenv ]]; then 00:01:56.251 # Clear garbage from the node's name: 00:01:56.251 # agt-er_autotest_547-896 -> autotest_547-896 00:01:56.251 # $HOSTNAME is the actual container id 00:01:56.251 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:56.251 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:56.251 # We can assume this is a mount from a host where container is running, 00:01:56.251 # so fetch its hostname to easily identify the target swarm worker. 
00:01:56.251 container="$(< /etc/hostname) ($agent)" 00:01:56.251 else 00:01:56.251 # Fallback 00:01:56.251 container=$agent 00:01:56.251 fi 00:01:56.251 fi 00:01:56.251 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:56.251 00:01:56.263 [Pipeline] } 00:01:56.280 [Pipeline] // withEnv 00:01:56.289 [Pipeline] setCustomBuildProperty 00:01:56.303 [Pipeline] stage 00:01:56.305 [Pipeline] { (Tests) 00:01:56.322 [Pipeline] sh 00:01:56.602 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:56.875 [Pipeline] sh 00:01:57.154 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:57.168 [Pipeline] timeout 00:01:57.168 Timeout set to expire in 1 hr 0 min 00:01:57.170 [Pipeline] { 00:01:57.185 [Pipeline] sh 00:01:57.464 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:58.031 HEAD is now at 27a8e04f9 lib/nvme: pre alloc the dma buffer for data-set-management 00:01:58.044 [Pipeline] sh 00:01:58.325 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:58.597 [Pipeline] sh 00:01:58.877 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:59.153 [Pipeline] sh 00:01:59.437 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:59.697 ++ readlink -f spdk_repo 00:01:59.697 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:59.697 + [[ -n /home/vagrant/spdk_repo ]] 00:01:59.697 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:59.697 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:59.697 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:59.697 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:59.697 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:59.697 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:59.697 + cd /home/vagrant/spdk_repo 00:01:59.697 + source /etc/os-release 00:01:59.697 ++ NAME='Fedora Linux' 00:01:59.697 ++ VERSION='39 (Cloud Edition)' 00:01:59.697 ++ ID=fedora 00:01:59.697 ++ VERSION_ID=39 00:01:59.697 ++ VERSION_CODENAME= 00:01:59.697 ++ PLATFORM_ID=platform:f39 00:01:59.697 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:59.697 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:59.697 ++ LOGO=fedora-logo-icon 00:01:59.697 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:59.697 ++ HOME_URL=https://fedoraproject.org/ 00:01:59.697 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:59.697 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:59.697 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:59.697 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:59.697 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:59.697 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:59.697 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:59.697 ++ SUPPORT_END=2024-11-12 00:01:59.697 ++ VARIANT='Cloud Edition' 00:01:59.697 ++ VARIANT_ID=cloud 00:01:59.697 + uname -a 00:01:59.697 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:59.697 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:59.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:59.956 Hugepages 00:01:59.956 node hugesize free / total 00:02:00.215 node0 1048576kB 0 / 0 00:02:00.215 node0 2048kB 0 / 0 00:02:00.215 00:02:00.215 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:00.215 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:00.215 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:00.215 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:00.215 + rm -f /tmp/spdk-ld-path 00:02:00.215 + source autorun-spdk.conf 00:02:00.215 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.215 ++ SPDK_TEST_NVMF=1 00:02:00.215 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.215 ++ SPDK_TEST_URING=1 00:02:00.215 ++ SPDK_TEST_USDT=1 00:02:00.215 ++ SPDK_RUN_UBSAN=1 00:02:00.215 ++ NET_TYPE=virt 00:02:00.215 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.215 ++ RUN_NIGHTLY=0 00:02:00.215 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:00.215 + [[ -n '' ]] 00:02:00.215 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:00.215 + for M in /var/spdk/build-*-manifest.txt 00:02:00.215 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:00.215 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.215 + for M in /var/spdk/build-*-manifest.txt 00:02:00.215 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:00.215 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.215 + for M in /var/spdk/build-*-manifest.txt 00:02:00.215 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:00.215 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:00.215 ++ uname 00:02:00.215 + [[ Linux == \L\i\n\u\x ]] 00:02:00.215 + sudo dmesg -T 00:02:00.215 + sudo dmesg --clear 00:02:00.215 + dmesg_pid=5368 00:02:00.215 + [[ Fedora Linux == FreeBSD ]] 00:02:00.215 + sudo dmesg -Tw 00:02:00.215 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.215 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.215 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:00.215 + [[ -x /usr/src/fio-static/fio ]] 00:02:00.215 + export FIO_BIN=/usr/src/fio-static/fio 00:02:00.215 + FIO_BIN=/usr/src/fio-static/fio 00:02:00.215 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:00.215 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:00.215 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:00.215 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.215 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.215 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:00.215 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.215 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.215 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:00.215 Test configuration: 00:02:00.215 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.215 SPDK_TEST_NVMF=1 00:02:00.215 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.215 SPDK_TEST_URING=1 00:02:00.215 SPDK_TEST_USDT=1 00:02:00.215 SPDK_RUN_UBSAN=1 00:02:00.215 NET_TYPE=virt 00:02:00.215 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.474 RUN_NIGHTLY=0 09:16:24 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:00.474 09:16:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:00.474 09:16:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:00.474 09:16:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:00.474 09:16:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:00.474 09:16:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:00.474 09:16:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.474 09:16:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.474 09:16:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.474 09:16:24 -- paths/export.sh@5 -- $ export PATH 00:02:00.475 09:16:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.475 09:16:24 -- common/autobuild_common.sh@485 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:02:00.475 09:16:24 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:00.475 09:16:24 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729070184.XXXXXX 00:02:00.475 09:16:24 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729070184.dTrZjc 00:02:00.475 09:16:24 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:00.475 09:16:24 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:00.475 09:16:24 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:00.475 09:16:24 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:00.475 09:16:24 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:00.475 09:16:24 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:00.475 09:16:24 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:00.475 09:16:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.475 09:16:24 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:00.475 09:16:24 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:00.475 09:16:24 -- pm/common@17 -- $ local monitor 00:02:00.475 09:16:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.475 09:16:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.475 09:16:24 -- pm/common@25 -- $ sleep 1 00:02:00.475 09:16:24 -- pm/common@21 -- $ date +%s 00:02:00.475 09:16:24 -- pm/common@21 -- $ date +%s 00:02:00.475 09:16:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729070184 00:02:00.475 09:16:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729070184 00:02:00.475 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729070184_collect-cpu-load.pm.log 00:02:00.475 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729070184_collect-vmstat.pm.log 00:02:01.410 09:16:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:01.410 09:16:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:01.410 09:16:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:01.410 09:16:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:01.410 09:16:25 -- spdk/autobuild.sh@16 -- $ date -u 00:02:01.410 Wed Oct 16 09:16:25 AM UTC 2024 00:02:01.410 09:16:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:01.410 v25.01-pre-71-g27a8e04f9 00:02:01.410 09:16:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:01.410 09:16:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:01.410 09:16:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:01.410 09:16:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:01.410 09:16:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:01.410 09:16:25 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.410 
************************************ 00:02:01.410 START TEST ubsan 00:02:01.410 ************************************ 00:02:01.410 using ubsan 00:02:01.410 09:16:25 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:01.410 00:02:01.410 real 0m0.000s 00:02:01.410 user 0m0.000s 00:02:01.410 sys 0m0.000s 00:02:01.410 09:16:25 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:01.410 09:16:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:01.410 ************************************ 00:02:01.410 END TEST ubsan 00:02:01.410 ************************************ 00:02:01.410 09:16:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:01.410 09:16:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:01.410 09:16:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:01.410 09:16:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:01.410 09:16:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:01.410 09:16:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:01.410 09:16:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:01.410 09:16:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:01.410 09:16:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:01.671 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:01.671 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:01.929 Using 'verbs' RDMA provider 00:02:17.776 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:29.982 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:29.982 Creating mk/config.mk...done. 00:02:29.982 Creating mk/cc.flags.mk...done. 00:02:29.982 Type 'make' to build. 00:02:29.982 09:16:53 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:29.982 09:16:53 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:29.982 09:16:53 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:29.982 09:16:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.982 ************************************ 00:02:29.982 START TEST make 00:02:29.982 ************************************ 00:02:29.982 09:16:53 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:29.982 make[1]: Nothing to be done for 'all'. 
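The configure invocation above comes from get_config_params (visible in the trace), which translates the autorun-spdk.conf switches (SPDK_RUN_UBSAN=1, SPDK_TEST_URING=1, SPDK_TEST_USDT=1, and so on) into the corresponding ./configure flags (--enable-ubsan, --with-uring, --with-usdt, ...). Reproducing this build step by hand would look roughly like the sketch below (the resource monitors and scan-build excludes set up by autobuild are skipped):

# Rough manual equivalent of the build step driven by autorun.sh above
# (a sketch; the real flag list is produced by get_config_params from the
# autorun-spdk.conf values shown earlier in the log).
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
make -j10

The Meson output that follows belongs to the bundled DPDK submodule, which make builds first ("Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build", as noted above).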
00:02:42.213 The Meson build system 00:02:42.213 Version: 1.5.0 00:02:42.213 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:42.213 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:42.213 Build type: native build 00:02:42.213 Program cat found: YES (/usr/bin/cat) 00:02:42.213 Project name: DPDK 00:02:42.213 Project version: 24.03.0 00:02:42.213 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.213 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.213 Host machine cpu family: x86_64 00:02:42.213 Host machine cpu: x86_64 00:02:42.213 Message: ## Building in Developer Mode ## 00:02:42.213 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:42.213 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:42.213 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:42.213 Program python3 found: YES (/usr/bin/python3) 00:02:42.213 Program cat found: YES (/usr/bin/cat) 00:02:42.213 Compiler for C supports arguments -march=native: YES 00:02:42.213 Checking for size of "void *" : 8 00:02:42.213 Checking for size of "void *" : 8 (cached) 00:02:42.213 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:42.213 Library m found: YES 00:02:42.213 Library numa found: YES 00:02:42.213 Has header "numaif.h" : YES 00:02:42.213 Library fdt found: NO 00:02:42.213 Library execinfo found: NO 00:02:42.213 Has header "execinfo.h" : YES 00:02:42.213 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.213 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:42.213 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:42.213 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:42.213 Run-time dependency openssl found: YES 3.1.1 00:02:42.213 Run-time dependency libpcap found: YES 1.10.4 00:02:42.213 Has header "pcap.h" with dependency libpcap: YES 00:02:42.213 Compiler for C supports arguments -Wcast-qual: YES 00:02:42.213 Compiler for C supports arguments -Wdeprecated: YES 00:02:42.213 Compiler for C supports arguments -Wformat: YES 00:02:42.213 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:42.213 Compiler for C supports arguments -Wformat-security: NO 00:02:42.213 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.213 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:42.213 Compiler for C supports arguments -Wnested-externs: YES 00:02:42.213 Compiler for C supports arguments -Wold-style-definition: YES 00:02:42.213 Compiler for C supports arguments -Wpointer-arith: YES 00:02:42.213 Compiler for C supports arguments -Wsign-compare: YES 00:02:42.213 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:42.213 Compiler for C supports arguments -Wundef: YES 00:02:42.213 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.213 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:42.213 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:42.213 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.213 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:42.213 Program objdump found: YES (/usr/bin/objdump) 00:02:42.213 Compiler for C supports arguments -mavx512f: YES 00:02:42.213 Checking if "AVX512 checking" compiles: YES 00:02:42.213 Fetching value of define "__SSE4_2__" : 1 00:02:42.213 Fetching value of define 
"__AES__" : 1 00:02:42.213 Fetching value of define "__AVX__" : 1 00:02:42.213 Fetching value of define "__AVX2__" : 1 00:02:42.213 Fetching value of define "__AVX512BW__" : (undefined) 00:02:42.213 Fetching value of define "__AVX512CD__" : (undefined) 00:02:42.213 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:42.213 Fetching value of define "__AVX512F__" : (undefined) 00:02:42.213 Fetching value of define "__AVX512VL__" : (undefined) 00:02:42.213 Fetching value of define "__PCLMUL__" : 1 00:02:42.213 Fetching value of define "__RDRND__" : 1 00:02:42.213 Fetching value of define "__RDSEED__" : 1 00:02:42.213 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:42.213 Fetching value of define "__znver1__" : (undefined) 00:02:42.213 Fetching value of define "__znver2__" : (undefined) 00:02:42.213 Fetching value of define "__znver3__" : (undefined) 00:02:42.213 Fetching value of define "__znver4__" : (undefined) 00:02:42.213 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:42.213 Message: lib/log: Defining dependency "log" 00:02:42.213 Message: lib/kvargs: Defining dependency "kvargs" 00:02:42.213 Message: lib/telemetry: Defining dependency "telemetry" 00:02:42.213 Checking for function "getentropy" : NO 00:02:42.213 Message: lib/eal: Defining dependency "eal" 00:02:42.213 Message: lib/ring: Defining dependency "ring" 00:02:42.213 Message: lib/rcu: Defining dependency "rcu" 00:02:42.213 Message: lib/mempool: Defining dependency "mempool" 00:02:42.213 Message: lib/mbuf: Defining dependency "mbuf" 00:02:42.213 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:42.213 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.213 Compiler for C supports arguments -mpclmul: YES 00:02:42.213 Compiler for C supports arguments -maes: YES 00:02:42.213 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:42.213 Compiler for C supports arguments -mavx512bw: YES 00:02:42.213 Compiler for C supports arguments -mavx512dq: YES 00:02:42.213 Compiler for C supports arguments -mavx512vl: YES 00:02:42.213 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:42.213 Compiler for C supports arguments -mavx2: YES 00:02:42.213 Compiler for C supports arguments -mavx: YES 00:02:42.213 Message: lib/net: Defining dependency "net" 00:02:42.213 Message: lib/meter: Defining dependency "meter" 00:02:42.213 Message: lib/ethdev: Defining dependency "ethdev" 00:02:42.213 Message: lib/pci: Defining dependency "pci" 00:02:42.213 Message: lib/cmdline: Defining dependency "cmdline" 00:02:42.213 Message: lib/hash: Defining dependency "hash" 00:02:42.213 Message: lib/timer: Defining dependency "timer" 00:02:42.213 Message: lib/compressdev: Defining dependency "compressdev" 00:02:42.213 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:42.213 Message: lib/dmadev: Defining dependency "dmadev" 00:02:42.213 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:42.213 Message: lib/power: Defining dependency "power" 00:02:42.213 Message: lib/reorder: Defining dependency "reorder" 00:02:42.213 Message: lib/security: Defining dependency "security" 00:02:42.213 Has header "linux/userfaultfd.h" : YES 00:02:42.213 Has header "linux/vduse.h" : YES 00:02:42.213 Message: lib/vhost: Defining dependency "vhost" 00:02:42.213 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:42.213 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:42.214 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:42.214 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:42.214 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:42.214 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:42.214 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:42.214 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:42.214 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:42.214 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:42.214 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:42.214 Configuring doxy-api-html.conf using configuration 00:02:42.214 Configuring doxy-api-man.conf using configuration 00:02:42.214 Program mandb found: YES (/usr/bin/mandb) 00:02:42.214 Program sphinx-build found: NO 00:02:42.214 Configuring rte_build_config.h using configuration 00:02:42.214 Message: 00:02:42.214 ================= 00:02:42.214 Applications Enabled 00:02:42.214 ================= 00:02:42.214 00:02:42.214 apps: 00:02:42.214 00:02:42.214 00:02:42.214 Message: 00:02:42.214 ================= 00:02:42.214 Libraries Enabled 00:02:42.214 ================= 00:02:42.214 00:02:42.214 libs: 00:02:42.214 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:42.214 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:42.214 cryptodev, dmadev, power, reorder, security, vhost, 00:02:42.214 00:02:42.214 Message: 00:02:42.214 =============== 00:02:42.214 Drivers Enabled 00:02:42.214 =============== 00:02:42.214 00:02:42.214 common: 00:02:42.214 00:02:42.214 bus: 00:02:42.214 pci, vdev, 00:02:42.214 mempool: 00:02:42.214 ring, 00:02:42.214 dma: 00:02:42.214 00:02:42.214 net: 00:02:42.214 00:02:42.214 crypto: 00:02:42.214 00:02:42.214 compress: 00:02:42.214 00:02:42.214 vdpa: 00:02:42.214 00:02:42.214 00:02:42.214 Message: 00:02:42.214 ================= 00:02:42.214 Content Skipped 00:02:42.214 ================= 00:02:42.214 00:02:42.214 apps: 00:02:42.214 dumpcap: explicitly disabled via build config 00:02:42.214 graph: explicitly disabled via build config 00:02:42.214 pdump: explicitly disabled via build config 00:02:42.214 proc-info: explicitly disabled via build config 00:02:42.214 test-acl: explicitly disabled via build config 00:02:42.214 test-bbdev: explicitly disabled via build config 00:02:42.214 test-cmdline: explicitly disabled via build config 00:02:42.214 test-compress-perf: explicitly disabled via build config 00:02:42.214 test-crypto-perf: explicitly disabled via build config 00:02:42.214 test-dma-perf: explicitly disabled via build config 00:02:42.214 test-eventdev: explicitly disabled via build config 00:02:42.214 test-fib: explicitly disabled via build config 00:02:42.214 test-flow-perf: explicitly disabled via build config 00:02:42.214 test-gpudev: explicitly disabled via build config 00:02:42.214 test-mldev: explicitly disabled via build config 00:02:42.214 test-pipeline: explicitly disabled via build config 00:02:42.214 test-pmd: explicitly disabled via build config 00:02:42.214 test-regex: explicitly disabled via build config 00:02:42.214 test-sad: explicitly disabled via build config 00:02:42.214 test-security-perf: explicitly disabled via build config 00:02:42.214 00:02:42.214 libs: 00:02:42.214 argparse: explicitly disabled via build config 00:02:42.214 metrics: explicitly disabled via build config 00:02:42.214 acl: explicitly disabled via build config 00:02:42.214 bbdev: explicitly disabled via build config 
00:02:42.214 bitratestats: explicitly disabled via build config 00:02:42.214 bpf: explicitly disabled via build config 00:02:42.214 cfgfile: explicitly disabled via build config 00:02:42.214 distributor: explicitly disabled via build config 00:02:42.214 efd: explicitly disabled via build config 00:02:42.214 eventdev: explicitly disabled via build config 00:02:42.214 dispatcher: explicitly disabled via build config 00:02:42.214 gpudev: explicitly disabled via build config 00:02:42.214 gro: explicitly disabled via build config 00:02:42.214 gso: explicitly disabled via build config 00:02:42.214 ip_frag: explicitly disabled via build config 00:02:42.214 jobstats: explicitly disabled via build config 00:02:42.214 latencystats: explicitly disabled via build config 00:02:42.214 lpm: explicitly disabled via build config 00:02:42.214 member: explicitly disabled via build config 00:02:42.214 pcapng: explicitly disabled via build config 00:02:42.214 rawdev: explicitly disabled via build config 00:02:42.214 regexdev: explicitly disabled via build config 00:02:42.214 mldev: explicitly disabled via build config 00:02:42.214 rib: explicitly disabled via build config 00:02:42.214 sched: explicitly disabled via build config 00:02:42.214 stack: explicitly disabled via build config 00:02:42.214 ipsec: explicitly disabled via build config 00:02:42.214 pdcp: explicitly disabled via build config 00:02:42.214 fib: explicitly disabled via build config 00:02:42.214 port: explicitly disabled via build config 00:02:42.214 pdump: explicitly disabled via build config 00:02:42.214 table: explicitly disabled via build config 00:02:42.214 pipeline: explicitly disabled via build config 00:02:42.214 graph: explicitly disabled via build config 00:02:42.214 node: explicitly disabled via build config 00:02:42.214 00:02:42.214 drivers: 00:02:42.214 common/cpt: not in enabled drivers build config 00:02:42.214 common/dpaax: not in enabled drivers build config 00:02:42.214 common/iavf: not in enabled drivers build config 00:02:42.214 common/idpf: not in enabled drivers build config 00:02:42.214 common/ionic: not in enabled drivers build config 00:02:42.214 common/mvep: not in enabled drivers build config 00:02:42.214 common/octeontx: not in enabled drivers build config 00:02:42.214 bus/auxiliary: not in enabled drivers build config 00:02:42.214 bus/cdx: not in enabled drivers build config 00:02:42.214 bus/dpaa: not in enabled drivers build config 00:02:42.214 bus/fslmc: not in enabled drivers build config 00:02:42.214 bus/ifpga: not in enabled drivers build config 00:02:42.214 bus/platform: not in enabled drivers build config 00:02:42.214 bus/uacce: not in enabled drivers build config 00:02:42.214 bus/vmbus: not in enabled drivers build config 00:02:42.214 common/cnxk: not in enabled drivers build config 00:02:42.214 common/mlx5: not in enabled drivers build config 00:02:42.214 common/nfp: not in enabled drivers build config 00:02:42.214 common/nitrox: not in enabled drivers build config 00:02:42.214 common/qat: not in enabled drivers build config 00:02:42.214 common/sfc_efx: not in enabled drivers build config 00:02:42.214 mempool/bucket: not in enabled drivers build config 00:02:42.214 mempool/cnxk: not in enabled drivers build config 00:02:42.214 mempool/dpaa: not in enabled drivers build config 00:02:42.214 mempool/dpaa2: not in enabled drivers build config 00:02:42.214 mempool/octeontx: not in enabled drivers build config 00:02:42.214 mempool/stack: not in enabled drivers build config 00:02:42.214 dma/cnxk: not in enabled 
drivers build config 00:02:42.214 dma/dpaa: not in enabled drivers build config 00:02:42.214 dma/dpaa2: not in enabled drivers build config 00:02:42.214 dma/hisilicon: not in enabled drivers build config 00:02:42.214 dma/idxd: not in enabled drivers build config 00:02:42.214 dma/ioat: not in enabled drivers build config 00:02:42.214 dma/skeleton: not in enabled drivers build config 00:02:42.214 net/af_packet: not in enabled drivers build config 00:02:42.214 net/af_xdp: not in enabled drivers build config 00:02:42.214 net/ark: not in enabled drivers build config 00:02:42.214 net/atlantic: not in enabled drivers build config 00:02:42.214 net/avp: not in enabled drivers build config 00:02:42.214 net/axgbe: not in enabled drivers build config 00:02:42.214 net/bnx2x: not in enabled drivers build config 00:02:42.214 net/bnxt: not in enabled drivers build config 00:02:42.214 net/bonding: not in enabled drivers build config 00:02:42.214 net/cnxk: not in enabled drivers build config 00:02:42.214 net/cpfl: not in enabled drivers build config 00:02:42.214 net/cxgbe: not in enabled drivers build config 00:02:42.214 net/dpaa: not in enabled drivers build config 00:02:42.214 net/dpaa2: not in enabled drivers build config 00:02:42.214 net/e1000: not in enabled drivers build config 00:02:42.214 net/ena: not in enabled drivers build config 00:02:42.214 net/enetc: not in enabled drivers build config 00:02:42.214 net/enetfec: not in enabled drivers build config 00:02:42.214 net/enic: not in enabled drivers build config 00:02:42.214 net/failsafe: not in enabled drivers build config 00:02:42.214 net/fm10k: not in enabled drivers build config 00:02:42.214 net/gve: not in enabled drivers build config 00:02:42.214 net/hinic: not in enabled drivers build config 00:02:42.214 net/hns3: not in enabled drivers build config 00:02:42.214 net/i40e: not in enabled drivers build config 00:02:42.214 net/iavf: not in enabled drivers build config 00:02:42.214 net/ice: not in enabled drivers build config 00:02:42.214 net/idpf: not in enabled drivers build config 00:02:42.214 net/igc: not in enabled drivers build config 00:02:42.214 net/ionic: not in enabled drivers build config 00:02:42.214 net/ipn3ke: not in enabled drivers build config 00:02:42.214 net/ixgbe: not in enabled drivers build config 00:02:42.214 net/mana: not in enabled drivers build config 00:02:42.214 net/memif: not in enabled drivers build config 00:02:42.214 net/mlx4: not in enabled drivers build config 00:02:42.214 net/mlx5: not in enabled drivers build config 00:02:42.214 net/mvneta: not in enabled drivers build config 00:02:42.214 net/mvpp2: not in enabled drivers build config 00:02:42.214 net/netvsc: not in enabled drivers build config 00:02:42.214 net/nfb: not in enabled drivers build config 00:02:42.214 net/nfp: not in enabled drivers build config 00:02:42.214 net/ngbe: not in enabled drivers build config 00:02:42.214 net/null: not in enabled drivers build config 00:02:42.214 net/octeontx: not in enabled drivers build config 00:02:42.214 net/octeon_ep: not in enabled drivers build config 00:02:42.214 net/pcap: not in enabled drivers build config 00:02:42.214 net/pfe: not in enabled drivers build config 00:02:42.214 net/qede: not in enabled drivers build config 00:02:42.214 net/ring: not in enabled drivers build config 00:02:42.214 net/sfc: not in enabled drivers build config 00:02:42.214 net/softnic: not in enabled drivers build config 00:02:42.214 net/tap: not in enabled drivers build config 00:02:42.214 net/thunderx: not in enabled drivers build 
config 00:02:42.214 net/txgbe: not in enabled drivers build config 00:02:42.214 net/vdev_netvsc: not in enabled drivers build config 00:02:42.215 net/vhost: not in enabled drivers build config 00:02:42.215 net/virtio: not in enabled drivers build config 00:02:42.215 net/vmxnet3: not in enabled drivers build config 00:02:42.215 raw/*: missing internal dependency, "rawdev" 00:02:42.215 crypto/armv8: not in enabled drivers build config 00:02:42.215 crypto/bcmfs: not in enabled drivers build config 00:02:42.215 crypto/caam_jr: not in enabled drivers build config 00:02:42.215 crypto/ccp: not in enabled drivers build config 00:02:42.215 crypto/cnxk: not in enabled drivers build config 00:02:42.215 crypto/dpaa_sec: not in enabled drivers build config 00:02:42.215 crypto/dpaa2_sec: not in enabled drivers build config 00:02:42.215 crypto/ipsec_mb: not in enabled drivers build config 00:02:42.215 crypto/mlx5: not in enabled drivers build config 00:02:42.215 crypto/mvsam: not in enabled drivers build config 00:02:42.215 crypto/nitrox: not in enabled drivers build config 00:02:42.215 crypto/null: not in enabled drivers build config 00:02:42.215 crypto/octeontx: not in enabled drivers build config 00:02:42.215 crypto/openssl: not in enabled drivers build config 00:02:42.215 crypto/scheduler: not in enabled drivers build config 00:02:42.215 crypto/uadk: not in enabled drivers build config 00:02:42.215 crypto/virtio: not in enabled drivers build config 00:02:42.215 compress/isal: not in enabled drivers build config 00:02:42.215 compress/mlx5: not in enabled drivers build config 00:02:42.215 compress/nitrox: not in enabled drivers build config 00:02:42.215 compress/octeontx: not in enabled drivers build config 00:02:42.215 compress/zlib: not in enabled drivers build config 00:02:42.215 regex/*: missing internal dependency, "regexdev" 00:02:42.215 ml/*: missing internal dependency, "mldev" 00:02:42.215 vdpa/ifc: not in enabled drivers build config 00:02:42.215 vdpa/mlx5: not in enabled drivers build config 00:02:42.215 vdpa/nfp: not in enabled drivers build config 00:02:42.215 vdpa/sfc: not in enabled drivers build config 00:02:42.215 event/*: missing internal dependency, "eventdev" 00:02:42.215 baseband/*: missing internal dependency, "bbdev" 00:02:42.215 gpu/*: missing internal dependency, "gpudev" 00:02:42.215 00:02:42.215 00:02:42.215 Build targets in project: 85 00:02:42.215 00:02:42.215 DPDK 24.03.0 00:02:42.215 00:02:42.215 User defined options 00:02:42.215 buildtype : debug 00:02:42.215 default_library : shared 00:02:42.215 libdir : lib 00:02:42.215 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.215 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:42.215 c_link_args : 00:02:42.215 cpu_instruction_set: native 00:02:42.215 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:42.215 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:42.215 enable_docs : false 00:02:42.215 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:42.215 enable_kmods : false 00:02:42.215 max_lcores : 128 00:02:42.215 tests : false 00:02:42.215 
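The "User defined options" summary above is the DPDK configuration that SPDK's bundled dpdkbuild wrapper passes to Meson; nearly all apps, libraries and drivers are disabled so that only the pieces SPDK needs are built. Assembled by hand, the equivalent invocation would look roughly like this (a sketch; SPDK's build scripts generate the real command, and the option values below are copied from the summary above):

# Approximate Meson/ninja invocation behind the configuration summary above
# (hypothetical standalone reconstruction; values copied from the log).
cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C build-tmp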
00:02:42.215 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.215 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.215 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.215 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.215 [3/268] Linking static target lib/librte_kvargs.a 00:02:42.215 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.215 [5/268] Linking static target lib/librte_log.a 00:02:42.215 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.215 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.215 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.215 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.474 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.474 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.474 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.474 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.474 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.474 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.733 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.733 [17/268] Linking static target lib/librte_telemetry.a 00:02:42.733 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.733 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.733 [20/268] Linking target lib/librte_log.so.24.1 00:02:42.991 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:42.991 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:43.250 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.250 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:43.250 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.250 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.250 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.508 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.508 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.508 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.508 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.508 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.508 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:43.767 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.767 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.767 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:44.025 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.283 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 
00:02:44.283 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.283 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.283 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.283 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.283 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.283 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.542 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.542 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.542 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.542 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.542 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.800 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:45.057 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:45.057 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:45.316 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.316 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.316 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.316 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.316 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:45.574 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.574 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.574 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.574 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.832 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:46.091 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:46.091 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:46.091 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.091 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:46.349 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:46.349 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.607 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.607 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.607 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:46.607 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.607 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:46.607 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.607 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:46.867 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:47.127 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:47.127 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:47.127 [79/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:47.127 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.127 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:47.127 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.127 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.386 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.386 [85/268] Linking static target lib/librte_eal.a 00:02:47.645 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.645 [87/268] Linking static target lib/librte_ring.a 00:02:47.645 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:47.645 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:47.645 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.645 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.645 [92/268] Linking static target lib/librte_rcu.a 00:02:47.903 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:47.903 [94/268] Linking static target lib/librte_mempool.a 00:02:47.903 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.903 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:48.161 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.161 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.161 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:48.161 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:48.161 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.161 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:48.420 [103/268] Linking static target lib/librte_mbuf.a 00:02:48.677 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:48.677 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.677 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:48.677 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:48.677 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.677 [109/268] Linking static target lib/librte_net.a 00:02:48.677 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.677 [111/268] Linking static target lib/librte_meter.a 00:02:49.243 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.243 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.243 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.243 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.243 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.243 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.243 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.502 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.069 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.069 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.069 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.328 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.328 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.586 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.586 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.586 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.586 [128/268] Linking static target lib/librte_pci.a 00:02:50.586 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.586 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.586 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.586 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.844 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.844 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.844 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.844 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:50.844 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:50.844 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.844 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.845 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.845 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.845 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.845 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.845 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.845 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:51.103 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.361 [147/268] Linking static target lib/librte_ethdev.a 00:02:51.619 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.619 [149/268] Linking static target lib/librte_timer.a 00:02:51.619 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.619 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.619 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:51.619 [153/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.619 [154/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.619 [155/268] Linking static target lib/librte_cmdline.a 00:02:51.885 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.154 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.154 [158/268] Linking static target lib/librte_hash.a 00:02:52.154 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.154 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 
00:02:52.154 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.412 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.412 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.412 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.412 [165/268] Linking static target lib/librte_compressdev.a 00:02:52.983 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.983 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.983 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.983 [169/268] Linking static target lib/librte_dmadev.a 00:02:52.983 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.983 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.983 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.983 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.241 [174/268] Linking static target lib/librte_cryptodev.a 00:02:53.241 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.241 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.500 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.500 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.758 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.758 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.758 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.758 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.758 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.758 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.016 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:54.275 [186/268] Linking static target lib/librte_power.a 00:02:54.275 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.533 [188/268] Linking static target lib/librte_reorder.a 00:02:54.533 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.533 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.533 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.533 [192/268] Linking static target lib/librte_security.a 00:02:54.792 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.050 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.050 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.309 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.309 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.309 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.567 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.567 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.567 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.134 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.134 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.134 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.134 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:56.392 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.392 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.392 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.392 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.392 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.650 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.650 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.650 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.650 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.650 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.650 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:56.650 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.650 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.650 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.908 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:56.908 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:56.908 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.205 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.205 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.205 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.205 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.205 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:57.205 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.773 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.773 [230/268] Linking static target lib/librte_vhost.a 00:02:58.709 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.709 [232/268] Linking target lib/librte_eal.so.24.1 00:02:58.709 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:58.709 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:58.709 [235/268] Linking target lib/librte_pci.so.24.1 00:02:58.709 [236/268] Linking target lib/librte_meter.so.24.1 00:02:58.709 [237/268] Linking target lib/librte_ring.so.24.1 00:02:58.709 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:58.709 [239/268] Linking target lib/librte_timer.so.24.1 
00:02:58.967 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:58.967 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:58.967 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:58.967 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:58.967 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:58.967 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:58.967 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:58.967 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:58.967 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:58.967 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.226 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.226 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:59.226 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.226 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.226 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.226 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:59.226 [256/268] Linking target lib/librte_net.so.24.1 00:02:59.226 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:59.226 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:59.484 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.484 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:59.484 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:59.484 [262/268] Linking target lib/librte_hash.so.24.1 00:02:59.484 [263/268] Linking target lib/librte_security.so.24.1 00:02:59.484 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:59.742 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:59.742 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:59.742 [267/268] Linking target lib/librte_power.so.24.1 00:02:59.742 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:59.742 INFO: autodetecting backend as ninja 00:02:59.742 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:26.298 CC lib/log/log.o 00:03:26.298 CC lib/log/log_flags.o 00:03:26.298 CC lib/log/log_deprecated.o 00:03:26.298 CC lib/ut/ut.o 00:03:26.298 CC lib/ut_mock/mock.o 00:03:26.298 LIB libspdk_ut.a 00:03:26.298 LIB libspdk_log.a 00:03:26.298 LIB libspdk_ut_mock.a 00:03:26.298 SO libspdk_ut.so.2.0 00:03:26.298 SO libspdk_log.so.7.1 00:03:26.298 SO libspdk_ut_mock.so.6.0 00:03:26.298 SYMLINK libspdk_ut.so 00:03:26.298 SYMLINK libspdk_ut_mock.so 00:03:26.298 SYMLINK libspdk_log.so 00:03:26.298 CXX lib/trace_parser/trace.o 00:03:26.298 CC lib/util/base64.o 00:03:26.298 CC lib/util/bit_array.o 00:03:26.298 CC lib/util/cpuset.o 00:03:26.298 CC lib/util/crc32.o 00:03:26.298 CC lib/util/crc32c.o 00:03:26.298 CC lib/util/crc16.o 00:03:26.298 CC lib/dma/dma.o 00:03:26.298 CC lib/ioat/ioat.o 00:03:26.298 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.298 CC lib/util/crc32_ieee.o 00:03:26.298 CC lib/util/crc64.o 00:03:26.298 CC lib/util/dif.o 
00:03:26.298 CC lib/util/fd.o 00:03:26.298 CC lib/util/fd_group.o 00:03:26.298 LIB libspdk_dma.a 00:03:26.298 CC lib/util/file.o 00:03:26.298 SO libspdk_dma.so.5.0 00:03:26.298 LIB libspdk_ioat.a 00:03:26.298 CC lib/vfio_user/host/vfio_user.o 00:03:26.298 SYMLINK libspdk_dma.so 00:03:26.298 CC lib/util/hexlify.o 00:03:26.298 CC lib/util/iov.o 00:03:26.298 SO libspdk_ioat.so.7.0 00:03:26.298 CC lib/util/math.o 00:03:26.298 CC lib/util/net.o 00:03:26.298 SYMLINK libspdk_ioat.so 00:03:26.298 CC lib/util/pipe.o 00:03:26.298 CC lib/util/strerror_tls.o 00:03:26.298 CC lib/util/string.o 00:03:26.298 CC lib/util/uuid.o 00:03:26.299 CC lib/util/xor.o 00:03:26.299 CC lib/util/zipf.o 00:03:26.299 LIB libspdk_vfio_user.a 00:03:26.299 SO libspdk_vfio_user.so.5.0 00:03:26.299 CC lib/util/md5.o 00:03:26.299 SYMLINK libspdk_vfio_user.so 00:03:26.299 LIB libspdk_util.a 00:03:26.299 SO libspdk_util.so.10.0 00:03:26.299 LIB libspdk_trace_parser.a 00:03:26.299 SO libspdk_trace_parser.so.6.0 00:03:26.299 SYMLINK libspdk_util.so 00:03:26.299 SYMLINK libspdk_trace_parser.so 00:03:26.299 CC lib/vmd/vmd.o 00:03:26.299 CC lib/rdma_provider/common.o 00:03:26.299 CC lib/vmd/led.o 00:03:26.299 CC lib/json/json_parse.o 00:03:26.299 CC lib/json/json_util.o 00:03:26.299 CC lib/conf/conf.o 00:03:26.299 CC lib/rdma_utils/rdma_utils.o 00:03:26.299 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:26.299 CC lib/idxd/idxd.o 00:03:26.299 CC lib/env_dpdk/env.o 00:03:26.299 CC lib/idxd/idxd_user.o 00:03:26.299 CC lib/idxd/idxd_kernel.o 00:03:26.299 LIB libspdk_rdma_provider.a 00:03:26.299 SO libspdk_rdma_provider.so.6.0 00:03:26.299 LIB libspdk_conf.a 00:03:26.299 CC lib/json/json_write.o 00:03:26.299 SO libspdk_conf.so.6.0 00:03:26.299 CC lib/env_dpdk/memory.o 00:03:26.299 SYMLINK libspdk_rdma_provider.so 00:03:26.299 CC lib/env_dpdk/pci.o 00:03:26.299 SYMLINK libspdk_conf.so 00:03:26.299 CC lib/env_dpdk/init.o 00:03:26.299 LIB libspdk_rdma_utils.a 00:03:26.299 CC lib/env_dpdk/threads.o 00:03:26.299 SO libspdk_rdma_utils.so.1.0 00:03:26.299 CC lib/env_dpdk/pci_ioat.o 00:03:26.299 SYMLINK libspdk_rdma_utils.so 00:03:26.299 CC lib/env_dpdk/pci_virtio.o 00:03:26.299 LIB libspdk_json.a 00:03:26.299 CC lib/env_dpdk/pci_vmd.o 00:03:26.299 SO libspdk_json.so.6.0 00:03:26.299 LIB libspdk_idxd.a 00:03:26.299 CC lib/env_dpdk/pci_idxd.o 00:03:26.299 CC lib/env_dpdk/pci_event.o 00:03:26.299 SO libspdk_idxd.so.12.1 00:03:26.299 SYMLINK libspdk_json.so 00:03:26.299 CC lib/env_dpdk/sigbus_handler.o 00:03:26.299 LIB libspdk_vmd.a 00:03:26.299 SYMLINK libspdk_idxd.so 00:03:26.299 CC lib/env_dpdk/pci_dpdk.o 00:03:26.299 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:26.299 SO libspdk_vmd.so.6.0 00:03:26.299 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:26.299 SYMLINK libspdk_vmd.so 00:03:26.558 CC lib/jsonrpc/jsonrpc_server.o 00:03:26.558 CC lib/jsonrpc/jsonrpc_client.o 00:03:26.558 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:26.558 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:26.817 LIB libspdk_jsonrpc.a 00:03:26.817 SO libspdk_jsonrpc.so.6.0 00:03:26.817 SYMLINK libspdk_jsonrpc.so 00:03:27.076 LIB libspdk_env_dpdk.a 00:03:27.076 CC lib/rpc/rpc.o 00:03:27.335 SO libspdk_env_dpdk.so.15.0 00:03:27.335 SYMLINK libspdk_env_dpdk.so 00:03:27.335 LIB libspdk_rpc.a 00:03:27.335 SO libspdk_rpc.so.6.0 00:03:27.594 SYMLINK libspdk_rpc.so 00:03:27.594 CC lib/notify/notify.o 00:03:27.594 CC lib/notify/notify_rpc.o 00:03:27.594 CC lib/trace/trace.o 00:03:27.594 CC lib/trace/trace_rpc.o 00:03:27.594 CC lib/trace/trace_flags.o 00:03:27.594 CC lib/keyring/keyring.o 00:03:27.594 CC 
lib/keyring/keyring_rpc.o 00:03:27.865 LIB libspdk_notify.a 00:03:27.865 SO libspdk_notify.so.6.0 00:03:28.138 SYMLINK libspdk_notify.so 00:03:28.138 LIB libspdk_keyring.a 00:03:28.138 LIB libspdk_trace.a 00:03:28.138 SO libspdk_keyring.so.2.0 00:03:28.138 SO libspdk_trace.so.11.0 00:03:28.138 SYMLINK libspdk_keyring.so 00:03:28.138 SYMLINK libspdk_trace.so 00:03:28.397 CC lib/thread/thread.o 00:03:28.397 CC lib/thread/iobuf.o 00:03:28.397 CC lib/sock/sock.o 00:03:28.397 CC lib/sock/sock_rpc.o 00:03:28.965 LIB libspdk_sock.a 00:03:28.965 SO libspdk_sock.so.10.0 00:03:28.965 SYMLINK libspdk_sock.so 00:03:29.224 CC lib/nvme/nvme_fabric.o 00:03:29.224 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:29.224 CC lib/nvme/nvme_ctrlr.o 00:03:29.224 CC lib/nvme/nvme_ns.o 00:03:29.224 CC lib/nvme/nvme_ns_cmd.o 00:03:29.224 CC lib/nvme/nvme_pcie_common.o 00:03:29.224 CC lib/nvme/nvme.o 00:03:29.224 CC lib/nvme/nvme_pcie.o 00:03:29.224 CC lib/nvme/nvme_qpair.o 00:03:30.158 LIB libspdk_thread.a 00:03:30.158 CC lib/nvme/nvme_quirks.o 00:03:30.158 SO libspdk_thread.so.10.2 00:03:30.158 CC lib/nvme/nvme_transport.o 00:03:30.158 CC lib/nvme/nvme_discovery.o 00:03:30.158 SYMLINK libspdk_thread.so 00:03:30.158 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.158 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.158 CC lib/nvme/nvme_tcp.o 00:03:30.416 CC lib/nvme/nvme_opal.o 00:03:30.416 CC lib/nvme/nvme_io_msg.o 00:03:30.416 CC lib/nvme/nvme_poll_group.o 00:03:30.982 CC lib/nvme/nvme_zns.o 00:03:30.982 CC lib/nvme/nvme_stubs.o 00:03:30.982 CC lib/accel/accel.o 00:03:30.982 CC lib/nvme/nvme_auth.o 00:03:30.982 CC lib/blob/blobstore.o 00:03:30.982 CC lib/nvme/nvme_cuse.o 00:03:30.982 CC lib/init/json_config.o 00:03:31.240 CC lib/init/subsystem.o 00:03:31.240 CC lib/blob/request.o 00:03:31.498 CC lib/init/subsystem_rpc.o 00:03:31.498 CC lib/blob/zeroes.o 00:03:31.498 CC lib/blob/blob_bs_dev.o 00:03:31.498 CC lib/init/rpc.o 00:03:31.498 CC lib/accel/accel_rpc.o 00:03:31.756 CC lib/virtio/virtio.o 00:03:31.756 CC lib/nvme/nvme_rdma.o 00:03:31.756 CC lib/accel/accel_sw.o 00:03:31.756 LIB libspdk_init.a 00:03:31.756 CC lib/virtio/virtio_vhost_user.o 00:03:31.756 SO libspdk_init.so.6.0 00:03:32.014 SYMLINK libspdk_init.so 00:03:32.014 CC lib/virtio/virtio_vfio_user.o 00:03:32.014 CC lib/virtio/virtio_pci.o 00:03:32.014 CC lib/fsdev/fsdev.o 00:03:32.014 CC lib/fsdev/fsdev_io.o 00:03:32.014 CC lib/fsdev/fsdev_rpc.o 00:03:32.014 LIB libspdk_accel.a 00:03:32.014 CC lib/event/app.o 00:03:32.273 CC lib/event/reactor.o 00:03:32.273 SO libspdk_accel.so.16.0 00:03:32.273 CC lib/event/log_rpc.o 00:03:32.273 CC lib/event/app_rpc.o 00:03:32.273 SYMLINK libspdk_accel.so 00:03:32.273 CC lib/event/scheduler_static.o 00:03:32.273 LIB libspdk_virtio.a 00:03:32.273 SO libspdk_virtio.so.7.0 00:03:32.273 SYMLINK libspdk_virtio.so 00:03:32.532 CC lib/bdev/bdev.o 00:03:32.532 CC lib/bdev/bdev_rpc.o 00:03:32.532 CC lib/bdev/bdev_zone.o 00:03:32.532 CC lib/bdev/scsi_nvme.o 00:03:32.532 CC lib/bdev/part.o 00:03:32.532 LIB libspdk_fsdev.a 00:03:32.532 LIB libspdk_event.a 00:03:32.791 SO libspdk_fsdev.so.1.0 00:03:32.791 SO libspdk_event.so.14.0 00:03:32.791 SYMLINK libspdk_fsdev.so 00:03:32.791 SYMLINK libspdk_event.so 00:03:33.048 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:33.048 LIB libspdk_nvme.a 00:03:33.306 SO libspdk_nvme.so.14.0 00:03:33.564 SYMLINK libspdk_nvme.so 00:03:33.564 LIB libspdk_fuse_dispatcher.a 00:03:33.564 SO libspdk_fuse_dispatcher.so.1.0 00:03:33.851 SYMLINK libspdk_fuse_dispatcher.so 00:03:34.110 LIB libspdk_blob.a 00:03:34.110 SO 
libspdk_blob.so.11.0 00:03:34.368 SYMLINK libspdk_blob.so 00:03:34.368 CC lib/blobfs/tree.o 00:03:34.368 CC lib/blobfs/blobfs.o 00:03:34.368 CC lib/lvol/lvol.o 00:03:35.304 LIB libspdk_blobfs.a 00:03:35.304 SO libspdk_blobfs.so.10.0 00:03:35.304 LIB libspdk_bdev.a 00:03:35.562 LIB libspdk_lvol.a 00:03:35.562 SYMLINK libspdk_blobfs.so 00:03:35.562 SO libspdk_bdev.so.17.0 00:03:35.562 SO libspdk_lvol.so.10.0 00:03:35.562 SYMLINK libspdk_lvol.so 00:03:35.562 SYMLINK libspdk_bdev.so 00:03:35.821 CC lib/ublk/ublk.o 00:03:35.821 CC lib/scsi/dev.o 00:03:35.821 CC lib/scsi/port.o 00:03:35.821 CC lib/ublk/ublk_rpc.o 00:03:35.821 CC lib/scsi/lun.o 00:03:35.821 CC lib/scsi/scsi.o 00:03:35.821 CC lib/nbd/nbd.o 00:03:35.821 CC lib/scsi/scsi_bdev.o 00:03:35.821 CC lib/ftl/ftl_core.o 00:03:35.821 CC lib/nvmf/ctrlr.o 00:03:36.079 CC lib/ftl/ftl_init.o 00:03:36.079 CC lib/ftl/ftl_layout.o 00:03:36.079 CC lib/nvmf/ctrlr_discovery.o 00:03:36.079 CC lib/nvmf/ctrlr_bdev.o 00:03:36.079 CC lib/nvmf/subsystem.o 00:03:36.079 CC lib/nvmf/nvmf.o 00:03:36.338 CC lib/nvmf/nvmf_rpc.o 00:03:36.338 CC lib/nbd/nbd_rpc.o 00:03:36.338 CC lib/ftl/ftl_debug.o 00:03:36.338 CC lib/scsi/scsi_pr.o 00:03:36.338 LIB libspdk_nbd.a 00:03:36.595 LIB libspdk_ublk.a 00:03:36.595 SO libspdk_nbd.so.7.0 00:03:36.595 SO libspdk_ublk.so.3.0 00:03:36.595 CC lib/nvmf/transport.o 00:03:36.595 SYMLINK libspdk_nbd.so 00:03:36.595 CC lib/nvmf/tcp.o 00:03:36.595 SYMLINK libspdk_ublk.so 00:03:36.595 CC lib/nvmf/stubs.o 00:03:36.595 CC lib/ftl/ftl_io.o 00:03:36.595 CC lib/scsi/scsi_rpc.o 00:03:36.853 CC lib/nvmf/mdns_server.o 00:03:36.853 CC lib/scsi/task.o 00:03:36.853 CC lib/ftl/ftl_sb.o 00:03:37.112 CC lib/nvmf/rdma.o 00:03:37.112 CC lib/ftl/ftl_l2p.o 00:03:37.112 LIB libspdk_scsi.a 00:03:37.112 CC lib/nvmf/auth.o 00:03:37.112 SO libspdk_scsi.so.9.0 00:03:37.112 CC lib/ftl/ftl_l2p_flat.o 00:03:37.112 CC lib/ftl/ftl_nv_cache.o 00:03:37.112 SYMLINK libspdk_scsi.so 00:03:37.112 CC lib/ftl/ftl_band.o 00:03:37.112 CC lib/ftl/ftl_band_ops.o 00:03:37.371 CC lib/ftl/ftl_writer.o 00:03:37.371 CC lib/ftl/ftl_rq.o 00:03:37.371 CC lib/iscsi/conn.o 00:03:37.631 CC lib/ftl/ftl_reloc.o 00:03:37.631 CC lib/iscsi/init_grp.o 00:03:37.631 CC lib/iscsi/iscsi.o 00:03:37.631 CC lib/ftl/ftl_l2p_cache.o 00:03:37.631 CC lib/iscsi/param.o 00:03:37.890 CC lib/iscsi/portal_grp.o 00:03:37.890 CC lib/iscsi/tgt_node.o 00:03:37.890 CC lib/iscsi/iscsi_subsystem.o 00:03:37.890 CC lib/iscsi/iscsi_rpc.o 00:03:38.150 CC lib/iscsi/task.o 00:03:38.150 CC lib/ftl/ftl_p2l.o 00:03:38.150 CC lib/ftl/ftl_p2l_log.o 00:03:38.150 CC lib/ftl/mngt/ftl_mngt.o 00:03:38.150 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:38.150 CC lib/vhost/vhost.o 00:03:38.411 CC lib/vhost/vhost_rpc.o 00:03:38.411 CC lib/vhost/vhost_scsi.o 00:03:38.411 CC lib/vhost/vhost_blk.o 00:03:38.411 CC lib/vhost/rte_vhost_user.o 00:03:38.411 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:38.411 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:38.676 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:38.676 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:38.676 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:38.968 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:38.968 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:38.968 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:38.968 LIB libspdk_iscsi.a 00:03:38.968 LIB libspdk_nvmf.a 00:03:38.968 SO libspdk_iscsi.so.8.0 00:03:38.968 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:39.228 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:39.228 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:39.228 SO libspdk_nvmf.so.19.1 00:03:39.228 CC lib/ftl/utils/ftl_conf.o 00:03:39.228 SYMLINK 
libspdk_iscsi.so 00:03:39.228 CC lib/ftl/utils/ftl_md.o 00:03:39.228 CC lib/ftl/utils/ftl_mempool.o 00:03:39.228 CC lib/ftl/utils/ftl_bitmap.o 00:03:39.228 CC lib/ftl/utils/ftl_property.o 00:03:39.487 SYMLINK libspdk_nvmf.so 00:03:39.487 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:39.487 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:39.487 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:39.487 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:39.487 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:39.487 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:39.487 LIB libspdk_vhost.a 00:03:39.487 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:39.745 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:39.745 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:39.745 SO libspdk_vhost.so.8.0 00:03:39.745 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:39.745 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:39.745 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:39.745 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:39.745 CC lib/ftl/base/ftl_base_dev.o 00:03:39.745 CC lib/ftl/base/ftl_base_bdev.o 00:03:39.745 SYMLINK libspdk_vhost.so 00:03:39.745 CC lib/ftl/ftl_trace.o 00:03:40.004 LIB libspdk_ftl.a 00:03:40.572 SO libspdk_ftl.so.9.0 00:03:40.830 SYMLINK libspdk_ftl.so 00:03:41.089 CC module/env_dpdk/env_dpdk_rpc.o 00:03:41.089 CC module/sock/uring/uring.o 00:03:41.089 CC module/scheduler/gscheduler/gscheduler.o 00:03:41.089 CC module/blob/bdev/blob_bdev.o 00:03:41.347 CC module/sock/posix/posix.o 00:03:41.347 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:41.347 CC module/accel/error/accel_error.o 00:03:41.347 CC module/keyring/file/keyring.o 00:03:41.347 CC module/fsdev/aio/fsdev_aio.o 00:03:41.347 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:41.347 LIB libspdk_env_dpdk_rpc.a 00:03:41.347 SO libspdk_env_dpdk_rpc.so.6.0 00:03:41.347 SYMLINK libspdk_env_dpdk_rpc.so 00:03:41.347 CC module/keyring/file/keyring_rpc.o 00:03:41.347 CC module/accel/error/accel_error_rpc.o 00:03:41.347 LIB libspdk_scheduler_gscheduler.a 00:03:41.347 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:41.347 LIB libspdk_scheduler_dpdk_governor.a 00:03:41.347 SO libspdk_scheduler_gscheduler.so.4.0 00:03:41.347 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:41.605 LIB libspdk_scheduler_dynamic.a 00:03:41.605 LIB libspdk_blob_bdev.a 00:03:41.605 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:41.605 SYMLINK libspdk_scheduler_gscheduler.so 00:03:41.605 SO libspdk_scheduler_dynamic.so.4.0 00:03:41.605 CC module/fsdev/aio/linux_aio_mgr.o 00:03:41.605 SO libspdk_blob_bdev.so.11.0 00:03:41.605 LIB libspdk_keyring_file.a 00:03:41.605 SO libspdk_keyring_file.so.2.0 00:03:41.605 LIB libspdk_accel_error.a 00:03:41.605 SYMLINK libspdk_scheduler_dynamic.so 00:03:41.605 SO libspdk_accel_error.so.2.0 00:03:41.605 SYMLINK libspdk_blob_bdev.so 00:03:41.605 SYMLINK libspdk_keyring_file.so 00:03:41.605 SYMLINK libspdk_accel_error.so 00:03:41.605 CC module/accel/ioat/accel_ioat.o 00:03:41.869 CC module/accel/dsa/accel_dsa.o 00:03:41.869 CC module/keyring/linux/keyring.o 00:03:41.869 CC module/accel/iaa/accel_iaa.o 00:03:41.869 CC module/bdev/delay/vbdev_delay.o 00:03:41.869 CC module/accel/ioat/accel_ioat_rpc.o 00:03:41.869 LIB libspdk_fsdev_aio.a 00:03:41.869 CC module/blobfs/bdev/blobfs_bdev.o 00:03:41.869 CC module/keyring/linux/keyring_rpc.o 00:03:41.869 CC module/bdev/error/vbdev_error.o 00:03:41.869 SO libspdk_fsdev_aio.so.1.0 00:03:42.128 LIB libspdk_sock_posix.a 00:03:42.128 LIB libspdk_sock_uring.a 00:03:42.128 SO libspdk_sock_posix.so.6.0 00:03:42.128 SO libspdk_sock_uring.so.5.0 00:03:42.128 SYMLINK 
libspdk_fsdev_aio.so 00:03:42.128 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:42.128 LIB libspdk_accel_ioat.a 00:03:42.128 SYMLINK libspdk_sock_uring.so 00:03:42.128 CC module/accel/iaa/accel_iaa_rpc.o 00:03:42.128 SYMLINK libspdk_sock_posix.so 00:03:42.128 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:42.128 SO libspdk_accel_ioat.so.6.0 00:03:42.128 LIB libspdk_keyring_linux.a 00:03:42.128 SO libspdk_keyring_linux.so.1.0 00:03:42.128 SYMLINK libspdk_accel_ioat.so 00:03:42.128 CC module/accel/dsa/accel_dsa_rpc.o 00:03:42.387 LIB libspdk_accel_iaa.a 00:03:42.387 SYMLINK libspdk_keyring_linux.so 00:03:42.387 CC module/bdev/error/vbdev_error_rpc.o 00:03:42.387 LIB libspdk_blobfs_bdev.a 00:03:42.387 SO libspdk_accel_iaa.so.3.0 00:03:42.387 SO libspdk_blobfs_bdev.so.6.0 00:03:42.387 CC module/bdev/gpt/gpt.o 00:03:42.387 SYMLINK libspdk_accel_iaa.so 00:03:42.387 LIB libspdk_accel_dsa.a 00:03:42.387 LIB libspdk_bdev_delay.a 00:03:42.387 SYMLINK libspdk_blobfs_bdev.so 00:03:42.387 SO libspdk_accel_dsa.so.5.0 00:03:42.387 SO libspdk_bdev_delay.so.6.0 00:03:42.387 CC module/bdev/malloc/bdev_malloc.o 00:03:42.387 CC module/bdev/lvol/vbdev_lvol.o 00:03:42.387 CC module/bdev/null/bdev_null.o 00:03:42.387 SYMLINK libspdk_accel_dsa.so 00:03:42.387 SYMLINK libspdk_bdev_delay.so 00:03:42.387 CC module/bdev/null/bdev_null_rpc.o 00:03:42.387 LIB libspdk_bdev_error.a 00:03:42.645 SO libspdk_bdev_error.so.6.0 00:03:42.645 CC module/bdev/nvme/bdev_nvme.o 00:03:42.645 CC module/bdev/passthru/vbdev_passthru.o 00:03:42.645 CC module/bdev/gpt/vbdev_gpt.o 00:03:42.645 CC module/bdev/raid/bdev_raid.o 00:03:42.645 SYMLINK libspdk_bdev_error.so 00:03:42.645 CC module/bdev/raid/bdev_raid_rpc.o 00:03:42.645 CC module/bdev/raid/bdev_raid_sb.o 00:03:42.645 CC module/bdev/split/vbdev_split.o 00:03:42.645 LIB libspdk_bdev_null.a 00:03:42.904 SO libspdk_bdev_null.so.6.0 00:03:42.904 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:42.904 LIB libspdk_bdev_gpt.a 00:03:42.904 SYMLINK libspdk_bdev_null.so 00:03:42.904 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:42.904 SO libspdk_bdev_gpt.so.6.0 00:03:42.904 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:42.904 SYMLINK libspdk_bdev_gpt.so 00:03:42.904 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:42.904 CC module/bdev/split/vbdev_split_rpc.o 00:03:42.904 LIB libspdk_bdev_malloc.a 00:03:42.904 CC module/bdev/raid/raid0.o 00:03:43.162 SO libspdk_bdev_malloc.so.6.0 00:03:43.162 LIB libspdk_bdev_passthru.a 00:03:43.162 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:43.162 CC module/bdev/uring/bdev_uring.o 00:03:43.162 SO libspdk_bdev_passthru.so.6.0 00:03:43.162 SYMLINK libspdk_bdev_malloc.so 00:03:43.162 SYMLINK libspdk_bdev_passthru.so 00:03:43.162 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:43.162 LIB libspdk_bdev_split.a 00:03:43.162 SO libspdk_bdev_split.so.6.0 00:03:43.420 CC module/bdev/aio/bdev_aio.o 00:03:43.420 SYMLINK libspdk_bdev_split.so 00:03:43.420 CC module/bdev/nvme/nvme_rpc.o 00:03:43.420 LIB libspdk_bdev_lvol.a 00:03:43.420 SO libspdk_bdev_lvol.so.6.0 00:03:43.420 LIB libspdk_bdev_zone_block.a 00:03:43.420 SO libspdk_bdev_zone_block.so.6.0 00:03:43.420 SYMLINK libspdk_bdev_lvol.so 00:03:43.420 CC module/bdev/raid/raid1.o 00:03:43.679 SYMLINK libspdk_bdev_zone_block.so 00:03:43.679 CC module/bdev/raid/concat.o 00:03:43.679 CC module/bdev/ftl/bdev_ftl.o 00:03:43.679 CC module/bdev/iscsi/bdev_iscsi.o 00:03:43.679 CC module/bdev/uring/bdev_uring_rpc.o 00:03:43.679 CC module/bdev/nvme/bdev_mdns_client.o 00:03:43.679 CC module/bdev/aio/bdev_aio_rpc.o 
00:03:43.679 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:43.679 CC module/bdev/nvme/vbdev_opal.o 00:03:43.679 LIB libspdk_bdev_uring.a 00:03:43.938 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:43.938 SO libspdk_bdev_uring.so.6.0 00:03:43.938 LIB libspdk_bdev_aio.a 00:03:43.938 LIB libspdk_bdev_raid.a 00:03:43.938 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:43.938 SO libspdk_bdev_aio.so.6.0 00:03:43.938 SYMLINK libspdk_bdev_uring.so 00:03:43.938 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:43.938 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:43.938 LIB libspdk_bdev_ftl.a 00:03:43.938 SO libspdk_bdev_raid.so.6.0 00:03:43.938 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:43.938 SYMLINK libspdk_bdev_aio.so 00:03:43.938 SO libspdk_bdev_ftl.so.6.0 00:03:43.938 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:44.198 SYMLINK libspdk_bdev_raid.so 00:03:44.198 SYMLINK libspdk_bdev_ftl.so 00:03:44.198 LIB libspdk_bdev_iscsi.a 00:03:44.198 SO libspdk_bdev_iscsi.so.6.0 00:03:44.198 SYMLINK libspdk_bdev_iscsi.so 00:03:44.461 LIB libspdk_bdev_virtio.a 00:03:44.720 SO libspdk_bdev_virtio.so.6.0 00:03:44.720 SYMLINK libspdk_bdev_virtio.so 00:03:44.978 LIB libspdk_bdev_nvme.a 00:03:44.978 SO libspdk_bdev_nvme.so.7.0 00:03:45.237 SYMLINK libspdk_bdev_nvme.so 00:03:45.803 CC module/event/subsystems/vmd/vmd.o 00:03:45.803 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:45.803 CC module/event/subsystems/scheduler/scheduler.o 00:03:45.803 CC module/event/subsystems/iobuf/iobuf.o 00:03:45.803 CC module/event/subsystems/keyring/keyring.o 00:03:45.803 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:45.803 CC module/event/subsystems/sock/sock.o 00:03:45.803 CC module/event/subsystems/fsdev/fsdev.o 00:03:45.803 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:45.803 LIB libspdk_event_fsdev.a 00:03:45.803 LIB libspdk_event_scheduler.a 00:03:45.803 LIB libspdk_event_iobuf.a 00:03:45.803 LIB libspdk_event_vmd.a 00:03:45.803 LIB libspdk_event_keyring.a 00:03:45.803 LIB libspdk_event_sock.a 00:03:45.803 SO libspdk_event_fsdev.so.1.0 00:03:45.803 LIB libspdk_event_vhost_blk.a 00:03:45.803 SO libspdk_event_scheduler.so.4.0 00:03:45.803 SO libspdk_event_iobuf.so.3.0 00:03:45.803 SO libspdk_event_keyring.so.1.0 00:03:45.803 SO libspdk_event_vmd.so.6.0 00:03:45.803 SO libspdk_event_sock.so.5.0 00:03:45.803 SO libspdk_event_vhost_blk.so.3.0 00:03:46.060 SYMLINK libspdk_event_scheduler.so 00:03:46.060 SYMLINK libspdk_event_fsdev.so 00:03:46.060 SYMLINK libspdk_event_keyring.so 00:03:46.060 SYMLINK libspdk_event_iobuf.so 00:03:46.060 SYMLINK libspdk_event_sock.so 00:03:46.060 SYMLINK libspdk_event_vmd.so 00:03:46.060 SYMLINK libspdk_event_vhost_blk.so 00:03:46.318 CC module/event/subsystems/accel/accel.o 00:03:46.318 LIB libspdk_event_accel.a 00:03:46.576 SO libspdk_event_accel.so.6.0 00:03:46.576 SYMLINK libspdk_event_accel.so 00:03:46.835 CC module/event/subsystems/bdev/bdev.o 00:03:47.093 LIB libspdk_event_bdev.a 00:03:47.093 SO libspdk_event_bdev.so.6.0 00:03:47.093 SYMLINK libspdk_event_bdev.so 00:03:47.352 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:47.352 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:47.352 CC module/event/subsystems/nbd/nbd.o 00:03:47.352 CC module/event/subsystems/scsi/scsi.o 00:03:47.352 CC module/event/subsystems/ublk/ublk.o 00:03:47.611 LIB libspdk_event_nbd.a 00:03:47.611 LIB libspdk_event_ublk.a 00:03:47.611 LIB libspdk_event_scsi.a 00:03:47.611 SO libspdk_event_nbd.so.6.0 00:03:47.611 SO libspdk_event_ublk.so.3.0 00:03:47.611 SO libspdk_event_scsi.so.6.0 00:03:47.611 SYMLINK 
libspdk_event_nbd.so 00:03:47.611 SYMLINK libspdk_event_ublk.so 00:03:47.611 LIB libspdk_event_nvmf.a 00:03:47.611 SYMLINK libspdk_event_scsi.so 00:03:47.611 SO libspdk_event_nvmf.so.6.0 00:03:47.870 SYMLINK libspdk_event_nvmf.so 00:03:47.870 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:47.870 CC module/event/subsystems/iscsi/iscsi.o 00:03:48.129 LIB libspdk_event_vhost_scsi.a 00:03:48.129 SO libspdk_event_vhost_scsi.so.3.0 00:03:48.129 LIB libspdk_event_iscsi.a 00:03:48.129 SO libspdk_event_iscsi.so.6.0 00:03:48.129 SYMLINK libspdk_event_vhost_scsi.so 00:03:48.388 SYMLINK libspdk_event_iscsi.so 00:03:48.388 SO libspdk.so.6.0 00:03:48.388 SYMLINK libspdk.so 00:03:48.646 CC app/trace_record/trace_record.o 00:03:48.646 CC app/spdk_lspci/spdk_lspci.o 00:03:48.646 CXX app/trace/trace.o 00:03:48.646 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:48.646 CC app/iscsi_tgt/iscsi_tgt.o 00:03:48.646 CC app/nvmf_tgt/nvmf_main.o 00:03:48.905 CC app/spdk_tgt/spdk_tgt.o 00:03:48.905 CC examples/ioat/perf/perf.o 00:03:48.905 CC examples/util/zipf/zipf.o 00:03:48.905 CC test/thread/poller_perf/poller_perf.o 00:03:48.905 LINK spdk_lspci 00:03:48.905 LINK interrupt_tgt 00:03:48.905 LINK nvmf_tgt 00:03:48.905 LINK zipf 00:03:48.905 LINK spdk_trace_record 00:03:49.163 LINK iscsi_tgt 00:03:49.163 LINK poller_perf 00:03:49.163 LINK spdk_tgt 00:03:49.163 LINK ioat_perf 00:03:49.163 CC app/spdk_nvme_perf/perf.o 00:03:49.163 LINK spdk_trace 00:03:49.163 CC app/spdk_nvme_identify/identify.o 00:03:49.422 TEST_HEADER include/spdk/accel.h 00:03:49.422 TEST_HEADER include/spdk/accel_module.h 00:03:49.422 TEST_HEADER include/spdk/assert.h 00:03:49.422 TEST_HEADER include/spdk/barrier.h 00:03:49.422 TEST_HEADER include/spdk/base64.h 00:03:49.422 TEST_HEADER include/spdk/bdev.h 00:03:49.422 TEST_HEADER include/spdk/bdev_module.h 00:03:49.422 CC app/spdk_nvme_discover/discovery_aer.o 00:03:49.422 TEST_HEADER include/spdk/bdev_zone.h 00:03:49.422 TEST_HEADER include/spdk/bit_array.h 00:03:49.422 TEST_HEADER include/spdk/bit_pool.h 00:03:49.422 TEST_HEADER include/spdk/blob_bdev.h 00:03:49.422 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:49.422 TEST_HEADER include/spdk/blobfs.h 00:03:49.422 TEST_HEADER include/spdk/blob.h 00:03:49.422 TEST_HEADER include/spdk/conf.h 00:03:49.422 TEST_HEADER include/spdk/config.h 00:03:49.422 TEST_HEADER include/spdk/cpuset.h 00:03:49.422 TEST_HEADER include/spdk/crc16.h 00:03:49.422 TEST_HEADER include/spdk/crc32.h 00:03:49.422 TEST_HEADER include/spdk/crc64.h 00:03:49.422 TEST_HEADER include/spdk/dif.h 00:03:49.422 CC examples/ioat/verify/verify.o 00:03:49.422 TEST_HEADER include/spdk/dma.h 00:03:49.422 TEST_HEADER include/spdk/endian.h 00:03:49.422 TEST_HEADER include/spdk/env_dpdk.h 00:03:49.422 CC app/spdk_top/spdk_top.o 00:03:49.422 TEST_HEADER include/spdk/env.h 00:03:49.422 TEST_HEADER include/spdk/event.h 00:03:49.422 TEST_HEADER include/spdk/fd_group.h 00:03:49.422 TEST_HEADER include/spdk/fd.h 00:03:49.422 TEST_HEADER include/spdk/file.h 00:03:49.422 TEST_HEADER include/spdk/fsdev.h 00:03:49.422 TEST_HEADER include/spdk/fsdev_module.h 00:03:49.422 TEST_HEADER include/spdk/ftl.h 00:03:49.422 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:49.422 TEST_HEADER include/spdk/gpt_spec.h 00:03:49.422 TEST_HEADER include/spdk/hexlify.h 00:03:49.422 TEST_HEADER include/spdk/histogram_data.h 00:03:49.422 TEST_HEADER include/spdk/idxd.h 00:03:49.423 TEST_HEADER include/spdk/idxd_spec.h 00:03:49.423 TEST_HEADER include/spdk/init.h 00:03:49.423 TEST_HEADER include/spdk/ioat.h 
00:03:49.423 TEST_HEADER include/spdk/ioat_spec.h 00:03:49.423 TEST_HEADER include/spdk/iscsi_spec.h 00:03:49.423 TEST_HEADER include/spdk/json.h 00:03:49.423 TEST_HEADER include/spdk/jsonrpc.h 00:03:49.423 TEST_HEADER include/spdk/keyring.h 00:03:49.423 CC test/dma/test_dma/test_dma.o 00:03:49.423 TEST_HEADER include/spdk/keyring_module.h 00:03:49.423 TEST_HEADER include/spdk/likely.h 00:03:49.423 TEST_HEADER include/spdk/log.h 00:03:49.423 TEST_HEADER include/spdk/lvol.h 00:03:49.423 TEST_HEADER include/spdk/md5.h 00:03:49.423 TEST_HEADER include/spdk/memory.h 00:03:49.423 TEST_HEADER include/spdk/mmio.h 00:03:49.423 CC test/app/bdev_svc/bdev_svc.o 00:03:49.423 TEST_HEADER include/spdk/nbd.h 00:03:49.423 TEST_HEADER include/spdk/net.h 00:03:49.423 TEST_HEADER include/spdk/notify.h 00:03:49.423 TEST_HEADER include/spdk/nvme.h 00:03:49.423 TEST_HEADER include/spdk/nvme_intel.h 00:03:49.423 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:49.423 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:49.423 TEST_HEADER include/spdk/nvme_spec.h 00:03:49.423 TEST_HEADER include/spdk/nvme_zns.h 00:03:49.423 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:49.423 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:49.423 TEST_HEADER include/spdk/nvmf.h 00:03:49.423 TEST_HEADER include/spdk/nvmf_spec.h 00:03:49.423 TEST_HEADER include/spdk/nvmf_transport.h 00:03:49.423 CC examples/thread/thread/thread_ex.o 00:03:49.423 TEST_HEADER include/spdk/opal.h 00:03:49.423 TEST_HEADER include/spdk/opal_spec.h 00:03:49.423 TEST_HEADER include/spdk/pci_ids.h 00:03:49.423 TEST_HEADER include/spdk/pipe.h 00:03:49.423 TEST_HEADER include/spdk/queue.h 00:03:49.423 TEST_HEADER include/spdk/reduce.h 00:03:49.423 TEST_HEADER include/spdk/rpc.h 00:03:49.423 TEST_HEADER include/spdk/scheduler.h 00:03:49.423 TEST_HEADER include/spdk/scsi.h 00:03:49.423 TEST_HEADER include/spdk/scsi_spec.h 00:03:49.681 CC app/spdk_dd/spdk_dd.o 00:03:49.681 TEST_HEADER include/spdk/sock.h 00:03:49.681 LINK spdk_nvme_discover 00:03:49.681 TEST_HEADER include/spdk/stdinc.h 00:03:49.681 TEST_HEADER include/spdk/string.h 00:03:49.681 TEST_HEADER include/spdk/thread.h 00:03:49.681 TEST_HEADER include/spdk/trace.h 00:03:49.681 TEST_HEADER include/spdk/trace_parser.h 00:03:49.681 TEST_HEADER include/spdk/tree.h 00:03:49.681 TEST_HEADER include/spdk/ublk.h 00:03:49.681 TEST_HEADER include/spdk/util.h 00:03:49.681 TEST_HEADER include/spdk/uuid.h 00:03:49.681 TEST_HEADER include/spdk/version.h 00:03:49.681 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:49.681 LINK verify 00:03:49.681 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:49.681 TEST_HEADER include/spdk/vhost.h 00:03:49.681 TEST_HEADER include/spdk/vmd.h 00:03:49.681 TEST_HEADER include/spdk/xor.h 00:03:49.681 TEST_HEADER include/spdk/zipf.h 00:03:49.681 CXX test/cpp_headers/accel.o 00:03:49.681 LINK bdev_svc 00:03:49.681 CXX test/cpp_headers/accel_module.o 00:03:49.681 LINK thread 00:03:49.940 CC app/fio/nvme/fio_plugin.o 00:03:49.940 CXX test/cpp_headers/assert.o 00:03:49.940 LINK test_dma 00:03:49.940 CC app/fio/bdev/fio_plugin.o 00:03:49.940 LINK spdk_dd 00:03:49.940 LINK spdk_nvme_perf 00:03:50.198 LINK spdk_nvme_identify 00:03:50.198 CXX test/cpp_headers/barrier.o 00:03:50.198 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:50.198 CC examples/sock/hello_world/hello_sock.o 00:03:50.198 CXX test/cpp_headers/base64.o 00:03:50.198 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:50.198 LINK spdk_top 00:03:50.198 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:50.456 CC examples/vmd/lsvmd/lsvmd.o 00:03:50.456 
LINK hello_sock 00:03:50.457 CC app/vhost/vhost.o 00:03:50.457 CXX test/cpp_headers/bdev.o 00:03:50.457 CXX test/cpp_headers/bdev_module.o 00:03:50.457 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:50.457 LINK spdk_nvme 00:03:50.457 LINK spdk_bdev 00:03:50.457 LINK nvme_fuzz 00:03:50.457 LINK lsvmd 00:03:50.715 LINK vhost 00:03:50.715 CXX test/cpp_headers/bdev_zone.o 00:03:50.715 CC examples/vmd/led/led.o 00:03:50.715 CC test/app/histogram_perf/histogram_perf.o 00:03:50.715 CC examples/idxd/perf/perf.o 00:03:50.974 CC test/env/mem_callbacks/mem_callbacks.o 00:03:50.974 LINK vhost_fuzz 00:03:50.974 LINK led 00:03:50.974 CXX test/cpp_headers/bit_array.o 00:03:50.974 CC examples/accel/perf/accel_perf.o 00:03:50.974 LINK histogram_perf 00:03:50.974 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:50.974 CXX test/cpp_headers/bit_pool.o 00:03:50.974 CC examples/blob/hello_world/hello_blob.o 00:03:51.232 LINK idxd_perf 00:03:51.232 CC examples/blob/cli/blobcli.o 00:03:51.232 LINK hello_fsdev 00:03:51.232 CXX test/cpp_headers/blob_bdev.o 00:03:51.232 CC test/event/event_perf/event_perf.o 00:03:51.232 CXX test/cpp_headers/blobfs_bdev.o 00:03:51.232 CC examples/nvme/hello_world/hello_world.o 00:03:51.232 LINK hello_blob 00:03:51.491 LINK accel_perf 00:03:51.491 LINK event_perf 00:03:51.491 CXX test/cpp_headers/blobfs.o 00:03:51.491 LINK mem_callbacks 00:03:51.491 LINK hello_world 00:03:51.491 CC examples/nvme/reconnect/reconnect.o 00:03:51.491 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:51.491 CXX test/cpp_headers/blob.o 00:03:51.491 CC examples/nvme/arbitration/arbitration.o 00:03:51.491 CC test/event/reactor/reactor.o 00:03:51.749 CC examples/nvme/hotplug/hotplug.o 00:03:51.749 LINK blobcli 00:03:51.749 CC test/env/vtophys/vtophys.o 00:03:51.749 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.749 LINK reactor 00:03:51.749 CXX test/cpp_headers/conf.o 00:03:51.749 CXX test/cpp_headers/config.o 00:03:51.749 LINK vtophys 00:03:51.749 LINK reconnect 00:03:52.008 LINK iscsi_fuzz 00:03:52.008 LINK env_dpdk_post_init 00:03:52.008 LINK hotplug 00:03:52.008 LINK arbitration 00:03:52.008 CXX test/cpp_headers/cpuset.o 00:03:52.008 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:52.008 CC test/event/reactor_perf/reactor_perf.o 00:03:52.008 CXX test/cpp_headers/crc16.o 00:03:52.008 CXX test/cpp_headers/crc32.o 00:03:52.008 LINK nvme_manage 00:03:52.266 CC test/env/memory/memory_ut.o 00:03:52.266 LINK cmb_copy 00:03:52.266 CC test/app/jsoncat/jsoncat.o 00:03:52.266 CC test/app/stub/stub.o 00:03:52.266 CC test/env/pci/pci_ut.o 00:03:52.266 CXX test/cpp_headers/crc64.o 00:03:52.266 LINK reactor_perf 00:03:52.266 CC examples/bdev/hello_world/hello_bdev.o 00:03:52.266 CC examples/nvme/abort/abort.o 00:03:52.266 LINK jsoncat 00:03:52.266 CC test/nvme/aer/aer.o 00:03:52.524 LINK stub 00:03:52.524 CXX test/cpp_headers/dif.o 00:03:52.524 CC test/nvme/reset/reset.o 00:03:52.524 CC test/event/app_repeat/app_repeat.o 00:03:52.524 LINK hello_bdev 00:03:52.524 CC test/nvme/sgl/sgl.o 00:03:52.524 CXX test/cpp_headers/dma.o 00:03:52.524 LINK pci_ut 00:03:52.782 CC test/nvme/e2edp/nvme_dp.o 00:03:52.782 LINK aer 00:03:52.782 LINK abort 00:03:52.782 LINK app_repeat 00:03:52.782 LINK reset 00:03:52.782 CXX test/cpp_headers/endian.o 00:03:52.782 LINK sgl 00:03:53.039 CC examples/bdev/bdevperf/bdevperf.o 00:03:53.039 CC test/rpc_client/rpc_client_test.o 00:03:53.039 CXX test/cpp_headers/env_dpdk.o 00:03:53.039 LINK nvme_dp 00:03:53.039 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:53.039 CC 
test/nvme/err_injection/err_injection.o 00:03:53.039 CC test/nvme/overhead/overhead.o 00:03:53.039 CC test/event/scheduler/scheduler.o 00:03:53.039 CXX test/cpp_headers/env.o 00:03:53.039 CC test/nvme/startup/startup.o 00:03:53.296 LINK rpc_client_test 00:03:53.296 LINK pmr_persistence 00:03:53.296 CC test/nvme/reserve/reserve.o 00:03:53.296 LINK err_injection 00:03:53.296 LINK scheduler 00:03:53.296 CXX test/cpp_headers/event.o 00:03:53.296 LINK startup 00:03:53.296 CXX test/cpp_headers/fd_group.o 00:03:53.296 LINK overhead 00:03:53.296 CXX test/cpp_headers/fd.o 00:03:53.296 LINK memory_ut 00:03:53.555 LINK reserve 00:03:53.555 CXX test/cpp_headers/file.o 00:03:53.555 CXX test/cpp_headers/fsdev.o 00:03:53.555 CC test/nvme/simple_copy/simple_copy.o 00:03:53.555 CC test/nvme/connect_stress/connect_stress.o 00:03:53.555 CXX test/cpp_headers/fsdev_module.o 00:03:53.555 CC test/nvme/boot_partition/boot_partition.o 00:03:53.814 CC test/nvme/compliance/nvme_compliance.o 00:03:53.814 CC test/accel/dif/dif.o 00:03:53.814 CC test/blobfs/mkfs/mkfs.o 00:03:53.814 LINK bdevperf 00:03:53.814 CC test/lvol/esnap/esnap.o 00:03:53.814 LINK connect_stress 00:03:53.814 CXX test/cpp_headers/ftl.o 00:03:53.814 LINK boot_partition 00:03:53.814 LINK simple_copy 00:03:53.814 CC test/nvme/fused_ordering/fused_ordering.o 00:03:54.073 LINK mkfs 00:03:54.073 LINK nvme_compliance 00:03:54.073 CXX test/cpp_headers/fuse_dispatcher.o 00:03:54.073 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:54.073 LINK fused_ordering 00:03:54.073 CC test/nvme/fdp/fdp.o 00:03:54.073 CC test/nvme/cuse/cuse.o 00:03:54.073 CXX test/cpp_headers/gpt_spec.o 00:03:54.073 CXX test/cpp_headers/hexlify.o 00:03:54.073 CC examples/nvmf/nvmf/nvmf.o 00:03:54.331 CXX test/cpp_headers/histogram_data.o 00:03:54.331 CXX test/cpp_headers/idxd.o 00:03:54.331 LINK doorbell_aers 00:03:54.331 CXX test/cpp_headers/idxd_spec.o 00:03:54.331 CXX test/cpp_headers/init.o 00:03:54.331 LINK dif 00:03:54.331 CXX test/cpp_headers/ioat.o 00:03:54.331 CXX test/cpp_headers/ioat_spec.o 00:03:54.331 LINK fdp 00:03:54.331 CXX test/cpp_headers/iscsi_spec.o 00:03:54.589 CXX test/cpp_headers/json.o 00:03:54.589 LINK nvmf 00:03:54.589 CXX test/cpp_headers/jsonrpc.o 00:03:54.589 CXX test/cpp_headers/keyring.o 00:03:54.589 CXX test/cpp_headers/keyring_module.o 00:03:54.589 CXX test/cpp_headers/likely.o 00:03:54.589 CXX test/cpp_headers/log.o 00:03:54.589 CXX test/cpp_headers/lvol.o 00:03:54.589 CXX test/cpp_headers/md5.o 00:03:54.589 CXX test/cpp_headers/memory.o 00:03:54.847 CXX test/cpp_headers/mmio.o 00:03:54.847 CXX test/cpp_headers/nbd.o 00:03:54.847 CXX test/cpp_headers/net.o 00:03:54.847 CXX test/cpp_headers/notify.o 00:03:54.847 CC test/bdev/bdevio/bdevio.o 00:03:54.847 CXX test/cpp_headers/nvme.o 00:03:54.847 CXX test/cpp_headers/nvme_intel.o 00:03:54.847 CXX test/cpp_headers/nvme_ocssd.o 00:03:54.847 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:54.847 CXX test/cpp_headers/nvme_spec.o 00:03:54.847 CXX test/cpp_headers/nvme_zns.o 00:03:54.847 CXX test/cpp_headers/nvmf_cmd.o 00:03:55.105 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:55.105 CXX test/cpp_headers/nvmf.o 00:03:55.105 CXX test/cpp_headers/nvmf_spec.o 00:03:55.105 CXX test/cpp_headers/nvmf_transport.o 00:03:55.105 CXX test/cpp_headers/opal.o 00:03:55.105 CXX test/cpp_headers/opal_spec.o 00:03:55.105 CXX test/cpp_headers/pci_ids.o 00:03:55.105 LINK bdevio 00:03:55.105 CXX test/cpp_headers/pipe.o 00:03:55.105 CXX test/cpp_headers/queue.o 00:03:55.363 CXX test/cpp_headers/reduce.o 00:03:55.363 CXX 
test/cpp_headers/rpc.o 00:03:55.363 CXX test/cpp_headers/scheduler.o 00:03:55.363 CXX test/cpp_headers/scsi.o 00:03:55.363 CXX test/cpp_headers/scsi_spec.o 00:03:55.363 CXX test/cpp_headers/sock.o 00:03:55.363 CXX test/cpp_headers/stdinc.o 00:03:55.363 CXX test/cpp_headers/string.o 00:03:55.363 CXX test/cpp_headers/thread.o 00:03:55.363 CXX test/cpp_headers/trace.o 00:03:55.622 CXX test/cpp_headers/trace_parser.o 00:03:55.622 CXX test/cpp_headers/tree.o 00:03:55.622 CXX test/cpp_headers/ublk.o 00:03:55.622 CXX test/cpp_headers/util.o 00:03:55.622 CXX test/cpp_headers/uuid.o 00:03:55.622 CXX test/cpp_headers/version.o 00:03:55.622 CXX test/cpp_headers/vfio_user_pci.o 00:03:55.622 LINK cuse 00:03:55.622 CXX test/cpp_headers/vfio_user_spec.o 00:03:55.622 CXX test/cpp_headers/vhost.o 00:03:55.622 CXX test/cpp_headers/vmd.o 00:03:55.622 CXX test/cpp_headers/xor.o 00:03:55.622 CXX test/cpp_headers/zipf.o 00:03:58.946 LINK esnap 00:03:59.205 00:03:59.205 real 1m30.174s 00:03:59.205 user 8m8.458s 00:03:59.205 sys 1m39.726s 00:03:59.205 09:18:23 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:59.205 ************************************ 00:03:59.205 END TEST make 00:03:59.205 ************************************ 00:03:59.205 09:18:23 make -- common/autotest_common.sh@10 -- $ set +x 00:03:59.205 09:18:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:59.205 09:18:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:59.205 09:18:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:59.205 09:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.205 09:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:59.205 09:18:23 -- pm/common@44 -- $ pid=5400 00:03:59.205 09:18:23 -- pm/common@50 -- $ kill -TERM 5400 00:03:59.205 09:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.205 09:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:59.205 09:18:23 -- pm/common@44 -- $ pid=5402 00:03:59.205 09:18:23 -- pm/common@50 -- $ kill -TERM 5402 00:03:59.205 09:18:23 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:59.205 09:18:23 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:59.205 09:18:23 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:59.524 09:18:23 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:59.524 09:18:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.524 09:18:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.524 09:18:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.524 09:18:23 -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.524 09:18:23 -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.524 09:18:23 -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.524 09:18:23 -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.524 09:18:23 -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.524 09:18:23 -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.524 09:18:23 -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.524 09:18:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.524 09:18:23 -- scripts/common.sh@344 -- # case "$op" in 00:03:59.524 09:18:23 -- scripts/common.sh@345 -- # : 1 00:03:59.524 09:18:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.524 09:18:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.524 09:18:23 -- scripts/common.sh@365 -- # decimal 1 00:03:59.524 09:18:23 -- scripts/common.sh@353 -- # local d=1 00:03:59.524 09:18:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.524 09:18:23 -- scripts/common.sh@355 -- # echo 1 00:03:59.524 09:18:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.524 09:18:23 -- scripts/common.sh@366 -- # decimal 2 00:03:59.524 09:18:23 -- scripts/common.sh@353 -- # local d=2 00:03:59.524 09:18:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.524 09:18:23 -- scripts/common.sh@355 -- # echo 2 00:03:59.524 09:18:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.524 09:18:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.524 09:18:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.524 09:18:23 -- scripts/common.sh@368 -- # return 0 00:03:59.524 09:18:23 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.524 09:18:23 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.524 --rc genhtml_branch_coverage=1 00:03:59.524 --rc genhtml_function_coverage=1 00:03:59.524 --rc genhtml_legend=1 00:03:59.524 --rc geninfo_all_blocks=1 00:03:59.524 --rc geninfo_unexecuted_blocks=1 00:03:59.524 00:03:59.524 ' 00:03:59.524 09:18:23 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.524 --rc genhtml_branch_coverage=1 00:03:59.524 --rc genhtml_function_coverage=1 00:03:59.524 --rc genhtml_legend=1 00:03:59.524 --rc geninfo_all_blocks=1 00:03:59.524 --rc geninfo_unexecuted_blocks=1 00:03:59.524 00:03:59.524 ' 00:03:59.524 09:18:23 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.524 --rc genhtml_branch_coverage=1 00:03:59.524 --rc genhtml_function_coverage=1 00:03:59.524 --rc genhtml_legend=1 00:03:59.524 --rc geninfo_all_blocks=1 00:03:59.524 --rc geninfo_unexecuted_blocks=1 00:03:59.524 00:03:59.524 ' 00:03:59.524 09:18:23 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:59.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.524 --rc genhtml_branch_coverage=1 00:03:59.524 --rc genhtml_function_coverage=1 00:03:59.524 --rc genhtml_legend=1 00:03:59.524 --rc geninfo_all_blocks=1 00:03:59.524 --rc geninfo_unexecuted_blocks=1 00:03:59.524 00:03:59.524 ' 00:03:59.524 09:18:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.524 09:18:23 -- nvmf/common.sh@7 -- # uname -s 00:03:59.524 09:18:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.524 09:18:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.524 09:18:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.524 09:18:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.524 09:18:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.524 09:18:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.524 09:18:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.524 09:18:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.524 09:18:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.524 09:18:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.524 09:18:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:03:59.524 
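The scripts/common.sh trace above (the 'lt 1.15 2' / cmp_versions calls) is the gate that decides which coverage flags this run passes to lcov: the detected lcov version (1.15 here, pulled out with 'lcov --version | awk') is compared field by field against 2, and because it is lower the legacy '--rc lcov_branch_coverage / lcov_function_coverage' options are exported. The same check re-runs before each test suite later in the log. A condensed, stand-alone re-expression of that comparison (a simplified sketch, not the literal cmp_versions code; only the flag names and the awk extraction are taken from the trace):

lt() {
    # true (0) when version $1 is strictly lower than $2, comparing dot/dash fields numerically
    local IFS=.- i a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "lower than"
}

if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi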
09:18:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:03:59.524 09:18:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.524 09:18:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.524 09:18:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:59.524 09:18:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:59.524 09:18:23 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.524 09:18:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:59.524 09:18:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.524 09:18:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.524 09:18:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:59.524 09:18:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.524 09:18:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.524 09:18:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.524 09:18:23 -- paths/export.sh@5 -- # export PATH 00:03:59.524 09:18:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:59.524 09:18:23 -- nvmf/common.sh@51 -- # : 0 00:03:59.524 09:18:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:59.524 09:18:23 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:59.524 09:18:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:59.524 09:18:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.524 09:18:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.524 09:18:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:59.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:59.524 09:18:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:59.525 09:18:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:59.525 09:18:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:59.525 09:18:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.525 09:18:23 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.525 09:18:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.525 09:18:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:59.525 09:18:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.525 09:18:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.525 09:18:23 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.525 09:18:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.525 09:18:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.525 09:18:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:59.525 09:18:23 -- spdk/autotest.sh@48 -- # udevadm_pid=54493 00:03:59.525 09:18:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:59.525 09:18:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:59.525 09:18:23 -- pm/common@17 -- # local monitor 00:03:59.525 09:18:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.525 09:18:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:59.525 09:18:23 -- pm/common@25 -- # sleep 1 00:03:59.525 09:18:23 -- pm/common@21 -- # date +%s 00:03:59.525 09:18:23 -- pm/common@21 -- # date +%s 00:03:59.525 09:18:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729070303 00:03:59.525 09:18:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729070303 00:03:59.525 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729070303_collect-vmstat.pm.log 00:03:59.525 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729070303_collect-cpu-load.pm.log 00:04:00.456 09:18:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:00.456 09:18:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:00.456 09:18:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.456 09:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:00.456 09:18:24 -- spdk/autotest.sh@59 -- # create_test_list 00:04:00.456 09:18:24 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:00.456 09:18:24 -- common/autotest_common.sh@10 -- # set +x 00:04:00.456 09:18:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:00.456 09:18:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:00.456 09:18:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:00.456 09:18:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:00.456 09:18:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:00.456 09:18:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:00.456 09:18:24 -- common/autotest_common.sh@1455 -- # uname 00:04:00.456 09:18:24 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:00.456 09:18:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:00.456 09:18:24 -- common/autotest_common.sh@1475 -- # uname 00:04:00.456 09:18:24 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:00.456 09:18:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:00.456 09:18:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:00.714 lcov: LCOV version 1.15 00:04:00.714 09:18:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:18.828 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:18.828 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:36.914 09:18:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:36.914 09:18:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.914 09:18:58 -- common/autotest_common.sh@10 -- # set +x 00:04:36.914 09:18:58 -- spdk/autotest.sh@78 -- # rm -f 00:04:36.914 09:18:58 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:36.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.914 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:36.914 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:36.914 09:18:59 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:36.914 09:18:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:36.914 09:18:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:36.914 09:18:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:36.914 09:18:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.914 09:18:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:36.914 09:18:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:36.914 09:18:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.914 09:18:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.914 09:18:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.914 09:18:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:36.914 09:18:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:36.914 09:18:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:36.914 09:18:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.914 09:18:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.914 09:18:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:36.914 09:18:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:36.914 09:18:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:36.914 09:18:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.914 09:18:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.914 09:18:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:36.914 09:18:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:36.914 09:18:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:36.914 09:18:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.914 09:18:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:36.914 09:18:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.914 09:18:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.914 09:18:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:36.914 09:18:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:36.914 09:18:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:36.914 No valid GPT data, bailing 
00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # pt= 00:04:36.914 09:18:59 -- scripts/common.sh@395 -- # return 1 00:04:36.914 09:18:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.914 1+0 records in 00:04:36.914 1+0 records out 00:04:36.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460029 s, 228 MB/s 00:04:36.914 09:18:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.914 09:18:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.914 09:18:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:36.914 09:18:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:36.914 09:18:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:36.914 No valid GPT data, bailing 00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # pt= 00:04:36.914 09:18:59 -- scripts/common.sh@395 -- # return 1 00:04:36.914 09:18:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:36.914 1+0 records in 00:04:36.914 1+0 records out 00:04:36.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521268 s, 201 MB/s 00:04:36.914 09:18:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.914 09:18:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.914 09:18:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:36.914 09:18:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:36.914 09:18:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:36.914 No valid GPT data, bailing 00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # pt= 00:04:36.914 09:18:59 -- scripts/common.sh@395 -- # return 1 00:04:36.914 09:18:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:36.914 1+0 records in 00:04:36.914 1+0 records out 00:04:36.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454292 s, 231 MB/s 00:04:36.914 09:18:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.914 09:18:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.914 09:18:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:36.914 09:18:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:36.914 09:18:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:36.914 No valid GPT data, bailing 00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:36.914 09:18:59 -- scripts/common.sh@394 -- # pt= 00:04:36.914 09:18:59 -- scripts/common.sh@395 -- # return 1 00:04:36.914 09:18:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:36.915 1+0 records in 00:04:36.915 1+0 records out 00:04:36.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00555794 s, 189 MB/s 00:04:36.915 09:18:59 -- spdk/autotest.sh@105 -- # sync 00:04:36.915 09:18:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.915 09:18:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.915 09:18:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.175 09:19:01 -- spdk/autotest.sh@111 -- # uname -s 00:04:37.175 09:19:01 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:04:37.175 09:19:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:37.175 09:19:01 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:37.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.742 Hugepages 00:04:37.742 node hugesize free / total 00:04:37.742 node0 1048576kB 0 / 0 00:04:37.742 node0 2048kB 0 / 0 00:04:37.742 00:04:37.742 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.000 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:38.000 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:38.000 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:38.000 09:19:02 -- spdk/autotest.sh@117 -- # uname -s 00:04:38.000 09:19:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:38.000 09:19:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:38.000 09:19:02 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.934 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.934 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.934 09:19:03 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:39.868 09:19:04 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:39.868 09:19:04 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:39.868 09:19:04 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:39.868 09:19:04 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:39.868 09:19:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:39.868 09:19:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:39.868 09:19:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.868 09:19:04 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:39.868 09:19:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:40.126 09:19:04 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:40.126 09:19:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:40.126 09:19:04 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.412 Waiting for block devices as requested 00:04:40.412 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.412 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.671 09:19:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:40.671 09:19:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:40.671 09:19:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:40.671 09:19:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:40.671 09:19:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:40.671 09:19:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1541 -- # continue 00:04:40.671 09:19:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:40.671 09:19:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:40.671 09:19:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.671 09:19:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:40.671 09:19:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:40.671 09:19:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:40.671 09:19:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:40.671 09:19:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:40.671 09:19:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:40.671 09:19:04 -- common/autotest_common.sh@1541 -- # continue 00:04:40.671 09:19:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:40.671 09:19:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:40.671 09:19:04 -- common/autotest_common.sh@10 -- # set +x 00:04:40.671 09:19:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:40.671 09:19:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.671 09:19:04 -- common/autotest_common.sh@10 -- # set +x 00:04:40.671 09:19:04 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.240 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.497 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.497 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.497 09:19:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:41.497 09:19:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.497 09:19:05 -- common/autotest_common.sh@10 -- # set +x 00:04:41.497 09:19:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:41.497 09:19:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:41.497 09:19:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.497 09:19:05 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:41.497 09:19:05 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:41.497 09:19:05 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:41.497 09:19:05 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:41.497 09:19:05 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:41.497 09:19:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:41.497 09:19:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:41.497 09:19:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.498 09:19:05 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.498 09:19:05 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:41.756 09:19:05 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:41.756 09:19:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:41.756 09:19:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:41.756 09:19:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:41.756 09:19:05 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:41.756 09:19:05 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.756 09:19:05 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:41.756 09:19:05 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:41.756 09:19:05 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:41.756 09:19:05 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.756 09:19:05 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:41.756 09:19:05 -- common/autotest_common.sh@1570 -- # return 0 00:04:41.756 09:19:05 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:41.756 09:19:05 -- common/autotest_common.sh@1578 -- # return 0 00:04:41.756 09:19:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:41.756 09:19:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:41.756 09:19:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.756 09:19:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.756 09:19:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:41.756 09:19:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.756 09:19:05 -- common/autotest_common.sh@10 -- # set +x 00:04:41.756 09:19:05 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:41.756 09:19:05 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:41.756 09:19:05 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:41.756 09:19:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.756 09:19:05 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.756 09:19:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.756 09:19:05 -- common/autotest_common.sh@10 -- # set +x 00:04:41.756 ************************************ 00:04:41.756 START TEST env 00:04:41.756 ************************************ 00:04:41.756 09:19:05 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.756 * Looking for test storage... 00:04:41.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:41.756 09:19:06 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.756 09:19:06 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.756 09:19:06 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.756 09:19:06 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.756 09:19:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.756 09:19:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.756 09:19:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.756 09:19:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.756 09:19:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.756 09:19:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.756 09:19:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.756 09:19:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.756 09:19:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.756 09:19:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.756 09:19:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.756 09:19:06 env -- scripts/common.sh@344 -- # case "$op" in 00:04:41.756 09:19:06 env -- scripts/common.sh@345 -- # : 1 00:04:41.756 09:19:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.756 09:19:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.756 09:19:06 env -- scripts/common.sh@365 -- # decimal 1 00:04:41.756 09:19:06 env -- scripts/common.sh@353 -- # local d=1 00:04:41.756 09:19:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.756 09:19:06 env -- scripts/common.sh@355 -- # echo 1 00:04:41.756 09:19:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.756 09:19:06 env -- scripts/common.sh@366 -- # decimal 2 00:04:41.756 09:19:06 env -- scripts/common.sh@353 -- # local d=2 00:04:41.756 09:19:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.756 09:19:06 env -- scripts/common.sh@355 -- # echo 2 00:04:41.756 09:19:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.756 09:19:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.756 09:19:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.756 09:19:06 env -- scripts/common.sh@368 -- # return 0 00:04:41.756 09:19:06 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.756 09:19:06 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.756 --rc genhtml_branch_coverage=1 00:04:41.757 --rc genhtml_function_coverage=1 00:04:41.757 --rc genhtml_legend=1 00:04:41.757 --rc geninfo_all_blocks=1 00:04:41.757 --rc geninfo_unexecuted_blocks=1 00:04:41.757 00:04:41.757 ' 00:04:41.757 09:19:06 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.757 --rc genhtml_branch_coverage=1 00:04:41.757 --rc genhtml_function_coverage=1 00:04:41.757 --rc genhtml_legend=1 00:04:41.757 --rc geninfo_all_blocks=1 00:04:41.757 --rc geninfo_unexecuted_blocks=1 00:04:41.757 00:04:41.757 ' 00:04:41.757 09:19:06 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.757 --rc genhtml_branch_coverage=1 00:04:41.757 --rc genhtml_function_coverage=1 00:04:41.757 --rc genhtml_legend=1 00:04:41.757 --rc geninfo_all_blocks=1 00:04:41.757 --rc geninfo_unexecuted_blocks=1 00:04:41.757 00:04:41.757 ' 00:04:41.757 09:19:06 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.757 --rc genhtml_branch_coverage=1 00:04:41.757 --rc genhtml_function_coverage=1 00:04:41.757 --rc genhtml_legend=1 00:04:41.757 --rc geninfo_all_blocks=1 00:04:41.757 --rc geninfo_unexecuted_blocks=1 00:04:41.757 00:04:41.757 ' 00:04:41.757 09:19:06 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.757 09:19:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.757 09:19:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.757 09:19:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.757 ************************************ 00:04:41.757 START TEST env_memory 00:04:41.757 ************************************ 00:04:41.757 09:19:06 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.757 00:04:41.757 00:04:41.757 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.757 http://cunit.sourceforge.net/ 00:04:41.757 00:04:41.757 00:04:41.757 Suite: memory 00:04:42.015 Test: alloc and free memory map ...[2024-10-16 09:19:06.181187] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.016 passed 00:04:42.016 Test: mem map translation ...[2024-10-16 09:19:06.212802] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.016 [2024-10-16 09:19:06.212845] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.016 [2024-10-16 09:19:06.212907] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.016 [2024-10-16 09:19:06.212918] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.016 passed 00:04:42.016 Test: mem map registration ...[2024-10-16 09:19:06.276478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:42.016 [2024-10-16 09:19:06.276504] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:42.016 passed 00:04:42.016 Test: mem map adjacent registrations ...passed 00:04:42.016 00:04:42.016 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.016 suites 1 1 n/a 0 0 00:04:42.016 tests 4 4 4 0 0 00:04:42.016 asserts 152 152 152 0 n/a 00:04:42.016 00:04:42.016 Elapsed time = 0.214 seconds 00:04:42.016 00:04:42.016 real 0m0.229s 00:04:42.016 user 0m0.215s 00:04:42.016 sys 0m0.010s 00:04:42.016 09:19:06 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.016 09:19:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.016 ************************************ 00:04:42.016 END TEST env_memory 00:04:42.016 ************************************ 00:04:42.016 09:19:06 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.016 09:19:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.016 09:19:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.016 09:19:06 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.016 ************************************ 00:04:42.016 START TEST env_vtophys 00:04:42.016 ************************************ 00:04:42.016 09:19:06 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.274 EAL: lib.eal log level changed from notice to debug 00:04:42.274 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.274 EAL: Detected lcore 1 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 2 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 3 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 4 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 5 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 6 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 7 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 8 as core 0 on socket 0 00:04:42.275 EAL: Detected lcore 9 as core 0 on socket 0 00:04:42.275 EAL: Maximum logical cores by configuration: 128 00:04:42.275 EAL: Detected CPU lcores: 10 00:04:42.275 EAL: Detected NUMA nodes: 1 00:04:42.275 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:42.275 EAL: Detected shared linkage of DPDK 00:04:42.275 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:42.275 EAL: Selected IOVA mode 'PA' 00:04:42.275 EAL: Probing VFIO support... 00:04:42.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.275 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:42.275 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.275 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.275 EAL: Setting up physically contiguous memory... 00:04:42.275 EAL: Setting maximum number of open files to 524288 00:04:42.275 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.275 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.275 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.275 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.275 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.275 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.275 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.275 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.275 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.275 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.275 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.275 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.275 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.275 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.275 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.275 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.275 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.275 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.275 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.275 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.275 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.275 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.275 EAL: Hugepages will be freed exactly as allocated. 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: TSC frequency is ~2200000 KHz 00:04:42.275 EAL: Main lcore 0 is ready (tid=7fe88bc6ca00;cpuset=[0]) 00:04:42.275 EAL: Trying to obtain current memory policy. 00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 0 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.275 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.275 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.275 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:42.275 00:04:42.275 00:04:42.275 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.275 http://cunit.sourceforge.net/ 00:04:42.275 00:04:42.275 00:04:42.275 Suite: components_suite 00:04:42.275 Test: vtophys_malloc_test ...passed 00:04:42.275 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 4 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.275 EAL: Trying to obtain current memory policy. 00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 4 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.275 EAL: Trying to obtain current memory policy. 00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 4 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.275 EAL: Trying to obtain current memory policy. 00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 4 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.275 EAL: Trying to obtain current memory policy. 00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 4 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.275 EAL: Trying to obtain current memory policy. 
00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 4 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.275 EAL: Trying to obtain current memory policy. 00:04:42.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.275 EAL: Restoring previous memory policy: 4 00:04:42.275 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.275 EAL: request: mp_malloc_sync 00:04:42.275 EAL: No shared files mode enabled, IPC is disabled 00:04:42.275 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.534 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.534 EAL: request: mp_malloc_sync 00:04:42.534 EAL: No shared files mode enabled, IPC is disabled 00:04:42.534 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.534 EAL: Trying to obtain current memory policy. 00:04:42.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.534 EAL: Restoring previous memory policy: 4 00:04:42.534 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.534 EAL: request: mp_malloc_sync 00:04:42.534 EAL: No shared files mode enabled, IPC is disabled 00:04:42.534 EAL: Heap on socket 0 was expanded by 258MB 00:04:42.534 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.534 EAL: request: mp_malloc_sync 00:04:42.534 EAL: No shared files mode enabled, IPC is disabled 00:04:42.534 EAL: Heap on socket 0 was shrunk by 258MB 00:04:42.534 EAL: Trying to obtain current memory policy. 00:04:42.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.793 EAL: Restoring previous memory policy: 4 00:04:42.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.793 EAL: request: mp_malloc_sync 00:04:42.793 EAL: No shared files mode enabled, IPC is disabled 00:04:42.793 EAL: Heap on socket 0 was expanded by 514MB 00:04:42.793 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.793 EAL: request: mp_malloc_sync 00:04:42.793 EAL: No shared files mode enabled, IPC is disabled 00:04:42.793 EAL: Heap on socket 0 was shrunk by 514MB 00:04:42.793 EAL: Trying to obtain current memory policy. 
00:04:42.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.051 EAL: Restoring previous memory policy: 4 00:04:43.051 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.051 EAL: request: mp_malloc_sync 00:04:43.051 EAL: No shared files mode enabled, IPC is disabled 00:04:43.051 EAL: Heap on socket 0 was expanded by 1026MB 00:04:43.309 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.568 EAL: request: mp_malloc_sync 00:04:43.568 EAL: No shared files mode enabled, IPC is disabled 00:04:43.568 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:43.568 passed 00:04:43.568 00:04:43.568 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.568 suites 1 1 n/a 0 0 00:04:43.568 tests 2 2 2 0 0 00:04:43.568 asserts 5519 5519 5519 0 n/a 00:04:43.568 00:04:43.568 Elapsed time = 1.231 seconds 00:04:43.568 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.568 EAL: request: mp_malloc_sync 00:04:43.568 EAL: No shared files mode enabled, IPC is disabled 00:04:43.568 EAL: Heap on socket 0 was shrunk by 2MB 00:04:43.568 EAL: No shared files mode enabled, IPC is disabled 00:04:43.568 EAL: No shared files mode enabled, IPC is disabled 00:04:43.568 EAL: No shared files mode enabled, IPC is disabled 00:04:43.568 00:04:43.568 real 0m1.421s 00:04:43.568 user 0m0.790s 00:04:43.568 sys 0m0.504s 00:04:43.568 ************************************ 00:04:43.568 END TEST env_vtophys 00:04:43.568 ************************************ 00:04:43.568 09:19:07 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.568 09:19:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:43.568 09:19:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:43.568 09:19:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.568 09:19:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.568 09:19:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.568 ************************************ 00:04:43.568 START TEST env_pci 00:04:43.568 ************************************ 00:04:43.568 09:19:07 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:43.568 00:04:43.568 00:04:43.568 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.568 http://cunit.sourceforge.net/ 00:04:43.568 00:04:43.568 00:04:43.568 Suite: pci 00:04:43.568 Test: pci_hook ...[2024-10-16 09:19:07.907259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56740 has claimed it 00:04:43.568 passed 00:04:43.568 00:04:43.568 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.568 suites 1 1 n/a 0 0 00:04:43.568 tests 1 1 1 0 0 00:04:43.568 asserts 25 25 25 0 n/a 00:04:43.568 00:04:43.568 Elapsed time = 0.002 seconds 00:04:43.568 EAL: Cannot find device (10000:00:01.0) 00:04:43.568 EAL: Failed to attach device on primary process 00:04:43.568 ************************************ 00:04:43.568 END TEST env_pci 00:04:43.568 ************************************ 00:04:43.568 00:04:43.568 real 0m0.023s 00:04:43.568 user 0m0.009s 00:04:43.568 sys 0m0.011s 00:04:43.568 09:19:07 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.568 09:19:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:43.568 09:19:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:43.568 09:19:07 env -- env/env.sh@15 -- # uname 00:04:43.568 09:19:07 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:43.568 09:19:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:43.568 09:19:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.568 09:19:07 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:43.568 09:19:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.569 09:19:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.569 ************************************ 00:04:43.569 START TEST env_dpdk_post_init 00:04:43.569 ************************************ 00:04:43.827 09:19:07 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:43.827 EAL: Detected CPU lcores: 10 00:04:43.827 EAL: Detected NUMA nodes: 1 00:04:43.827 EAL: Detected shared linkage of DPDK 00:04:43.827 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:43.827 EAL: Selected IOVA mode 'PA' 00:04:43.827 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:43.827 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:43.827 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:43.827 Starting DPDK initialization... 00:04:43.827 Starting SPDK post initialization... 00:04:43.827 SPDK NVMe probe 00:04:43.827 Attaching to 0000:00:10.0 00:04:43.827 Attaching to 0000:00:11.0 00:04:43.827 Attached to 0000:00:10.0 00:04:43.827 Attached to 0000:00:11.0 00:04:43.827 Cleaning up... 00:04:43.827 00:04:43.827 real 0m0.181s 00:04:43.827 user 0m0.040s 00:04:43.827 sys 0m0.039s 00:04:43.827 09:19:08 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.827 09:19:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.827 ************************************ 00:04:43.827 END TEST env_dpdk_post_init 00:04:43.827 ************************************ 00:04:43.827 09:19:08 env -- env/env.sh@26 -- # uname 00:04:43.827 09:19:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:43.827 09:19:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.827 09:19:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.827 09:19:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.827 09:19:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.827 ************************************ 00:04:43.827 START TEST env_mem_callbacks 00:04:43.827 ************************************ 00:04:43.827 09:19:08 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:43.827 EAL: Detected CPU lcores: 10 00:04:43.827 EAL: Detected NUMA nodes: 1 00:04:43.827 EAL: Detected shared linkage of DPDK 00:04:44.096 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.096 EAL: Selected IOVA mode 'PA' 00:04:44.096 00:04:44.096 00:04:44.096 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.096 http://cunit.sourceforge.net/ 00:04:44.096 00:04:44.096 00:04:44.096 Suite: memory 00:04:44.096 Test: test ... 
00:04:44.096 register 0x200000200000 2097152 00:04:44.096 malloc 3145728 00:04:44.096 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.096 register 0x200000400000 4194304 00:04:44.096 buf 0x200000500000 len 3145728 PASSED 00:04:44.096 malloc 64 00:04:44.096 buf 0x2000004fff40 len 64 PASSED 00:04:44.096 malloc 4194304 00:04:44.096 register 0x200000800000 6291456 00:04:44.096 buf 0x200000a00000 len 4194304 PASSED 00:04:44.096 free 0x200000500000 3145728 00:04:44.096 free 0x2000004fff40 64 00:04:44.096 unregister 0x200000400000 4194304 PASSED 00:04:44.096 free 0x200000a00000 4194304 00:04:44.096 unregister 0x200000800000 6291456 PASSED 00:04:44.096 malloc 8388608 00:04:44.096 register 0x200000400000 10485760 00:04:44.096 buf 0x200000600000 len 8388608 PASSED 00:04:44.096 free 0x200000600000 8388608 00:04:44.096 unregister 0x200000400000 10485760 PASSED 00:04:44.096 passed 00:04:44.096 00:04:44.096 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.096 suites 1 1 n/a 0 0 00:04:44.096 tests 1 1 1 0 0 00:04:44.096 asserts 15 15 15 0 n/a 00:04:44.096 00:04:44.096 Elapsed time = 0.005 seconds 00:04:44.096 ************************************ 00:04:44.096 END TEST env_mem_callbacks 00:04:44.096 ************************************ 00:04:44.096 00:04:44.096 real 0m0.141s 00:04:44.096 user 0m0.020s 00:04:44.096 sys 0m0.019s 00:04:44.096 09:19:08 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.096 09:19:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:44.096 ************************************ 00:04:44.096 END TEST env 00:04:44.096 ************************************ 00:04:44.096 00:04:44.096 real 0m2.446s 00:04:44.096 user 0m1.286s 00:04:44.096 sys 0m0.807s 00:04:44.096 09:19:08 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.096 09:19:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.096 09:19:08 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.096 09:19:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.096 09:19:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.097 09:19:08 -- common/autotest_common.sh@10 -- # set +x 00:04:44.097 ************************************ 00:04:44.097 START TEST rpc 00:04:44.097 ************************************ 00:04:44.097 09:19:08 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:44.354 * Looking for test storage... 
00:04:44.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.354 09:19:08 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.354 09:19:08 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.354 09:19:08 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.354 09:19:08 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.354 09:19:08 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.354 09:19:08 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.354 09:19:08 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.354 09:19:08 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:44.354 09:19:08 rpc -- scripts/common.sh@345 -- # : 1 00:04:44.354 09:19:08 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.354 09:19:08 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.354 09:19:08 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.354 09:19:08 rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.354 09:19:08 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.354 09:19:08 rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.354 09:19:08 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.354 09:19:08 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.354 09:19:08 rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.354 09:19:08 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.354 09:19:08 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.354 09:19:08 rpc -- scripts/common.sh@368 -- # return 0 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.354 --rc genhtml_branch_coverage=1 00:04:44.354 --rc genhtml_function_coverage=1 00:04:44.354 --rc genhtml_legend=1 00:04:44.354 --rc geninfo_all_blocks=1 00:04:44.354 --rc geninfo_unexecuted_blocks=1 00:04:44.354 00:04:44.354 ' 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.354 --rc genhtml_branch_coverage=1 00:04:44.354 --rc genhtml_function_coverage=1 00:04:44.354 --rc genhtml_legend=1 00:04:44.354 --rc geninfo_all_blocks=1 00:04:44.354 --rc geninfo_unexecuted_blocks=1 00:04:44.354 00:04:44.354 ' 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.354 --rc genhtml_branch_coverage=1 00:04:44.354 --rc genhtml_function_coverage=1 00:04:44.354 --rc 
genhtml_legend=1 00:04:44.354 --rc geninfo_all_blocks=1 00:04:44.354 --rc geninfo_unexecuted_blocks=1 00:04:44.354 00:04:44.354 ' 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.354 --rc genhtml_branch_coverage=1 00:04:44.354 --rc genhtml_function_coverage=1 00:04:44.354 --rc genhtml_legend=1 00:04:44.354 --rc geninfo_all_blocks=1 00:04:44.354 --rc geninfo_unexecuted_blocks=1 00:04:44.354 00:04:44.354 ' 00:04:44.354 09:19:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56857 00:04:44.354 09:19:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:44.354 09:19:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.354 09:19:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56857 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@831 -- # '[' -z 56857 ']' 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.354 09:19:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.354 [2024-10-16 09:19:08.690818] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:04:44.354 [2024-10-16 09:19:08.691381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56857 ] 00:04:44.612 [2024-10-16 09:19:08.826276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.612 [2024-10-16 09:19:08.870460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:44.612 [2024-10-16 09:19:08.870514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56857' to capture a snapshot of events at runtime. 00:04:44.612 [2024-10-16 09:19:08.870541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:44.612 [2024-10-16 09:19:08.870550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:44.612 [2024-10-16 09:19:08.870569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56857 for offline analysis/debug. 
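The app_setup_trace notices above describe how the bdev tracepoints enabled by '-e bdev' can be inspected for this target. A minimal sketch following those hints, assuming spdk_trace lives under the same build/bin directory as the spdk_tgt binary shown in this log (the pid 56857 and shm path are specific to this run):

    # Snapshot runtime tracepoints while spdk_tgt (pid 56857) is still up
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 56857
    # Or keep the shared-memory trace file for offline analysis after the target exits
    cp /dev/shm/spdk_tgt_trace.pid56857 /tmp/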
00:04:44.612 [2024-10-16 09:19:08.871029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.612 [2024-10-16 09:19:08.939548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:44.870 09:19:09 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.870 09:19:09 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:44.870 09:19:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.871 09:19:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.871 09:19:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:44.871 09:19:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:44.871 09:19:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.871 09:19:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.871 09:19:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.871 ************************************ 00:04:44.871 START TEST rpc_integrity 00:04:44.871 ************************************ 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.871 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.871 { 00:04:44.871 "name": "Malloc0", 00:04:44.871 "aliases": [ 00:04:44.871 "1d4fc540-954b-4f0c-9360-d278d1120d60" 00:04:44.871 ], 00:04:44.871 "product_name": "Malloc disk", 00:04:44.871 "block_size": 512, 00:04:44.871 "num_blocks": 16384, 00:04:44.871 "uuid": "1d4fc540-954b-4f0c-9360-d278d1120d60", 00:04:44.871 "assigned_rate_limits": { 00:04:44.871 "rw_ios_per_sec": 0, 00:04:44.871 "rw_mbytes_per_sec": 0, 00:04:44.871 "r_mbytes_per_sec": 0, 00:04:44.871 "w_mbytes_per_sec": 0 00:04:44.871 }, 00:04:44.871 "claimed": false, 00:04:44.871 "zoned": false, 00:04:44.871 
"supported_io_types": { 00:04:44.871 "read": true, 00:04:44.871 "write": true, 00:04:44.871 "unmap": true, 00:04:44.871 "flush": true, 00:04:44.871 "reset": true, 00:04:44.871 "nvme_admin": false, 00:04:44.871 "nvme_io": false, 00:04:44.871 "nvme_io_md": false, 00:04:44.871 "write_zeroes": true, 00:04:44.871 "zcopy": true, 00:04:44.871 "get_zone_info": false, 00:04:44.871 "zone_management": false, 00:04:44.871 "zone_append": false, 00:04:44.871 "compare": false, 00:04:44.871 "compare_and_write": false, 00:04:44.871 "abort": true, 00:04:44.871 "seek_hole": false, 00:04:44.871 "seek_data": false, 00:04:44.871 "copy": true, 00:04:44.871 "nvme_iov_md": false 00:04:44.871 }, 00:04:44.871 "memory_domains": [ 00:04:44.871 { 00:04:44.871 "dma_device_id": "system", 00:04:44.871 "dma_device_type": 1 00:04:44.871 }, 00:04:44.871 { 00:04:44.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.871 "dma_device_type": 2 00:04:44.871 } 00:04:44.871 ], 00:04:44.871 "driver_specific": {} 00:04:44.871 } 00:04:44.871 ]' 00:04:44.871 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 [2024-10-16 09:19:09.303329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.129 [2024-10-16 09:19:09.303389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.129 [2024-10-16 09:19:09.303421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xada120 00:04:45.129 [2024-10-16 09:19:09.303436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.129 [2024-10-16 09:19:09.304929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.129 [2024-10-16 09:19:09.304966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.129 Passthru0 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.129 { 00:04:45.129 "name": "Malloc0", 00:04:45.129 "aliases": [ 00:04:45.129 "1d4fc540-954b-4f0c-9360-d278d1120d60" 00:04:45.129 ], 00:04:45.129 "product_name": "Malloc disk", 00:04:45.129 "block_size": 512, 00:04:45.129 "num_blocks": 16384, 00:04:45.129 "uuid": "1d4fc540-954b-4f0c-9360-d278d1120d60", 00:04:45.129 "assigned_rate_limits": { 00:04:45.129 "rw_ios_per_sec": 0, 00:04:45.129 "rw_mbytes_per_sec": 0, 00:04:45.129 "r_mbytes_per_sec": 0, 00:04:45.129 "w_mbytes_per_sec": 0 00:04:45.129 }, 00:04:45.129 "claimed": true, 00:04:45.129 "claim_type": "exclusive_write", 00:04:45.129 "zoned": false, 00:04:45.129 "supported_io_types": { 00:04:45.129 "read": true, 00:04:45.129 "write": true, 00:04:45.129 "unmap": true, 00:04:45.129 "flush": true, 00:04:45.129 "reset": true, 00:04:45.129 "nvme_admin": false, 
00:04:45.129 "nvme_io": false, 00:04:45.129 "nvme_io_md": false, 00:04:45.129 "write_zeroes": true, 00:04:45.129 "zcopy": true, 00:04:45.129 "get_zone_info": false, 00:04:45.129 "zone_management": false, 00:04:45.129 "zone_append": false, 00:04:45.129 "compare": false, 00:04:45.129 "compare_and_write": false, 00:04:45.129 "abort": true, 00:04:45.129 "seek_hole": false, 00:04:45.129 "seek_data": false, 00:04:45.129 "copy": true, 00:04:45.129 "nvme_iov_md": false 00:04:45.129 }, 00:04:45.129 "memory_domains": [ 00:04:45.129 { 00:04:45.129 "dma_device_id": "system", 00:04:45.129 "dma_device_type": 1 00:04:45.129 }, 00:04:45.129 { 00:04:45.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.129 "dma_device_type": 2 00:04:45.129 } 00:04:45.129 ], 00:04:45.129 "driver_specific": {} 00:04:45.129 }, 00:04:45.129 { 00:04:45.129 "name": "Passthru0", 00:04:45.129 "aliases": [ 00:04:45.129 "84c1f5c4-c39e-5946-b85e-4a6acd6d8528" 00:04:45.129 ], 00:04:45.129 "product_name": "passthru", 00:04:45.129 "block_size": 512, 00:04:45.129 "num_blocks": 16384, 00:04:45.129 "uuid": "84c1f5c4-c39e-5946-b85e-4a6acd6d8528", 00:04:45.129 "assigned_rate_limits": { 00:04:45.129 "rw_ios_per_sec": 0, 00:04:45.129 "rw_mbytes_per_sec": 0, 00:04:45.129 "r_mbytes_per_sec": 0, 00:04:45.129 "w_mbytes_per_sec": 0 00:04:45.129 }, 00:04:45.129 "claimed": false, 00:04:45.129 "zoned": false, 00:04:45.129 "supported_io_types": { 00:04:45.129 "read": true, 00:04:45.129 "write": true, 00:04:45.129 "unmap": true, 00:04:45.129 "flush": true, 00:04:45.129 "reset": true, 00:04:45.129 "nvme_admin": false, 00:04:45.129 "nvme_io": false, 00:04:45.129 "nvme_io_md": false, 00:04:45.129 "write_zeroes": true, 00:04:45.129 "zcopy": true, 00:04:45.129 "get_zone_info": false, 00:04:45.129 "zone_management": false, 00:04:45.129 "zone_append": false, 00:04:45.129 "compare": false, 00:04:45.129 "compare_and_write": false, 00:04:45.129 "abort": true, 00:04:45.129 "seek_hole": false, 00:04:45.129 "seek_data": false, 00:04:45.129 "copy": true, 00:04:45.129 "nvme_iov_md": false 00:04:45.129 }, 00:04:45.129 "memory_domains": [ 00:04:45.129 { 00:04:45.129 "dma_device_id": "system", 00:04:45.129 "dma_device_type": 1 00:04:45.129 }, 00:04:45.129 { 00:04:45.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.129 "dma_device_type": 2 00:04:45.129 } 00:04:45.129 ], 00:04:45.129 "driver_specific": { 00:04:45.129 "passthru": { 00:04:45.129 "name": "Passthru0", 00:04:45.129 "base_bdev_name": "Malloc0" 00:04:45.129 } 00:04:45.129 } 00:04:45.129 } 00:04:45.129 ]' 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:45.129 09:19:09 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.129 ************************************ 00:04:45.129 END TEST rpc_integrity 00:04:45.129 ************************************ 00:04:45.129 09:19:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.129 00:04:45.129 real 0m0.330s 00:04:45.129 user 0m0.230s 00:04:45.129 sys 0m0.033s 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.129 09:19:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 09:19:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:45.129 09:19:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.129 09:19:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.129 09:19:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.129 ************************************ 00:04:45.129 START TEST rpc_plugins 00:04:45.129 ************************************ 00:04:45.129 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:45.129 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:45.129 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.129 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:45.388 { 00:04:45.388 "name": "Malloc1", 00:04:45.388 "aliases": [ 00:04:45.388 "813afa2a-4dd3-43cf-b22e-dc26f79d2191" 00:04:45.388 ], 00:04:45.388 "product_name": "Malloc disk", 00:04:45.388 "block_size": 4096, 00:04:45.388 "num_blocks": 256, 00:04:45.388 "uuid": "813afa2a-4dd3-43cf-b22e-dc26f79d2191", 00:04:45.388 "assigned_rate_limits": { 00:04:45.388 "rw_ios_per_sec": 0, 00:04:45.388 "rw_mbytes_per_sec": 0, 00:04:45.388 "r_mbytes_per_sec": 0, 00:04:45.388 "w_mbytes_per_sec": 0 00:04:45.388 }, 00:04:45.388 "claimed": false, 00:04:45.388 "zoned": false, 00:04:45.388 "supported_io_types": { 00:04:45.388 "read": true, 00:04:45.388 "write": true, 00:04:45.388 "unmap": true, 00:04:45.388 "flush": true, 00:04:45.388 "reset": true, 00:04:45.388 "nvme_admin": false, 00:04:45.388 "nvme_io": false, 00:04:45.388 "nvme_io_md": false, 00:04:45.388 "write_zeroes": true, 00:04:45.388 "zcopy": true, 00:04:45.388 "get_zone_info": false, 00:04:45.388 "zone_management": false, 00:04:45.388 "zone_append": false, 00:04:45.388 "compare": false, 00:04:45.388 "compare_and_write": false, 00:04:45.388 "abort": true, 00:04:45.388 "seek_hole": false, 00:04:45.388 "seek_data": false, 00:04:45.388 "copy": true, 00:04:45.388 "nvme_iov_md": false 00:04:45.388 }, 00:04:45.388 "memory_domains": [ 00:04:45.388 { 
00:04:45.388 "dma_device_id": "system", 00:04:45.388 "dma_device_type": 1 00:04:45.388 }, 00:04:45.388 { 00:04:45.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.388 "dma_device_type": 2 00:04:45.388 } 00:04:45.388 ], 00:04:45.388 "driver_specific": {} 00:04:45.388 } 00:04:45.388 ]' 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:45.388 ************************************ 00:04:45.388 END TEST rpc_plugins 00:04:45.388 ************************************ 00:04:45.388 09:19:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:45.388 00:04:45.388 real 0m0.161s 00:04:45.388 user 0m0.104s 00:04:45.388 sys 0m0.024s 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.388 09:19:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 09:19:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:45.388 09:19:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.388 09:19:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.388 09:19:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 ************************************ 00:04:45.388 START TEST rpc_trace_cmd_test 00:04:45.388 ************************************ 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:45.388 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56857", 00:04:45.388 "tpoint_group_mask": "0x8", 00:04:45.388 "iscsi_conn": { 00:04:45.388 "mask": "0x2", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "scsi": { 00:04:45.388 "mask": "0x4", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "bdev": { 00:04:45.388 "mask": "0x8", 00:04:45.388 "tpoint_mask": "0xffffffffffffffff" 00:04:45.388 }, 00:04:45.388 "nvmf_rdma": { 00:04:45.388 "mask": "0x10", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "nvmf_tcp": { 00:04:45.388 "mask": "0x20", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "ftl": { 00:04:45.388 
"mask": "0x40", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "blobfs": { 00:04:45.388 "mask": "0x80", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "dsa": { 00:04:45.388 "mask": "0x200", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "thread": { 00:04:45.388 "mask": "0x400", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "nvme_pcie": { 00:04:45.388 "mask": "0x800", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "iaa": { 00:04:45.388 "mask": "0x1000", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "nvme_tcp": { 00:04:45.388 "mask": "0x2000", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "bdev_nvme": { 00:04:45.388 "mask": "0x4000", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "sock": { 00:04:45.388 "mask": "0x8000", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "blob": { 00:04:45.388 "mask": "0x10000", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "bdev_raid": { 00:04:45.388 "mask": "0x20000", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 }, 00:04:45.388 "scheduler": { 00:04:45.388 "mask": "0x40000", 00:04:45.388 "tpoint_mask": "0x0" 00:04:45.388 } 00:04:45.388 }' 00:04:45.388 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:45.646 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:45.646 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:45.646 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:45.646 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.646 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.646 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.647 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.647 09:19:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.647 ************************************ 00:04:45.647 END TEST rpc_trace_cmd_test 00:04:45.647 ************************************ 00:04:45.647 09:19:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.647 00:04:45.647 real 0m0.285s 00:04:45.647 user 0m0.250s 00:04:45.647 sys 0m0.023s 00:04:45.647 09:19:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.647 09:19:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 09:19:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:45.905 09:19:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.905 09:19:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.905 09:19:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.905 09:19:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.905 09:19:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 ************************************ 00:04:45.905 START TEST rpc_daemon_integrity 00:04:45.905 ************************************ 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 
09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.905 { 00:04:45.905 "name": "Malloc2", 00:04:45.905 "aliases": [ 00:04:45.905 "d60319fa-7ded-480a-b2ec-37d8fa387733" 00:04:45.905 ], 00:04:45.905 "product_name": "Malloc disk", 00:04:45.905 "block_size": 512, 00:04:45.905 "num_blocks": 16384, 00:04:45.905 "uuid": "d60319fa-7ded-480a-b2ec-37d8fa387733", 00:04:45.905 "assigned_rate_limits": { 00:04:45.905 "rw_ios_per_sec": 0, 00:04:45.905 "rw_mbytes_per_sec": 0, 00:04:45.905 "r_mbytes_per_sec": 0, 00:04:45.905 "w_mbytes_per_sec": 0 00:04:45.905 }, 00:04:45.905 "claimed": false, 00:04:45.905 "zoned": false, 00:04:45.905 "supported_io_types": { 00:04:45.905 "read": true, 00:04:45.905 "write": true, 00:04:45.905 "unmap": true, 00:04:45.905 "flush": true, 00:04:45.905 "reset": true, 00:04:45.905 "nvme_admin": false, 00:04:45.905 "nvme_io": false, 00:04:45.905 "nvme_io_md": false, 00:04:45.905 "write_zeroes": true, 00:04:45.905 "zcopy": true, 00:04:45.905 "get_zone_info": false, 00:04:45.905 "zone_management": false, 00:04:45.905 "zone_append": false, 00:04:45.905 "compare": false, 00:04:45.905 "compare_and_write": false, 00:04:45.905 "abort": true, 00:04:45.905 "seek_hole": false, 00:04:45.905 "seek_data": false, 00:04:45.905 "copy": true, 00:04:45.905 "nvme_iov_md": false 00:04:45.905 }, 00:04:45.905 "memory_domains": [ 00:04:45.905 { 00:04:45.905 "dma_device_id": "system", 00:04:45.905 "dma_device_type": 1 00:04:45.905 }, 00:04:45.905 { 00:04:45.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.905 "dma_device_type": 2 00:04:45.905 } 00:04:45.905 ], 00:04:45.905 "driver_specific": {} 00:04:45.905 } 00:04:45.905 ]' 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 [2024-10-16 09:19:10.223738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:45.905 [2024-10-16 09:19:10.223783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:45.905 [2024-10-16 09:19:10.223802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xae8a80 00:04:45.905 [2024-10-16 09:19:10.223811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.905 [2024-10-16 09:19:10.225671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.905 [2024-10-16 09:19:10.225706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.905 Passthru0 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.905 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.905 { 00:04:45.905 "name": "Malloc2", 00:04:45.905 "aliases": [ 00:04:45.905 "d60319fa-7ded-480a-b2ec-37d8fa387733" 00:04:45.905 ], 00:04:45.905 "product_name": "Malloc disk", 00:04:45.905 "block_size": 512, 00:04:45.905 "num_blocks": 16384, 00:04:45.905 "uuid": "d60319fa-7ded-480a-b2ec-37d8fa387733", 00:04:45.905 "assigned_rate_limits": { 00:04:45.905 "rw_ios_per_sec": 0, 00:04:45.905 "rw_mbytes_per_sec": 0, 00:04:45.905 "r_mbytes_per_sec": 0, 00:04:45.905 "w_mbytes_per_sec": 0 00:04:45.905 }, 00:04:45.905 "claimed": true, 00:04:45.905 "claim_type": "exclusive_write", 00:04:45.905 "zoned": false, 00:04:45.905 "supported_io_types": { 00:04:45.905 "read": true, 00:04:45.905 "write": true, 00:04:45.905 "unmap": true, 00:04:45.905 "flush": true, 00:04:45.905 "reset": true, 00:04:45.905 "nvme_admin": false, 00:04:45.905 "nvme_io": false, 00:04:45.905 "nvme_io_md": false, 00:04:45.905 "write_zeroes": true, 00:04:45.905 "zcopy": true, 00:04:45.905 "get_zone_info": false, 00:04:45.905 "zone_management": false, 00:04:45.905 "zone_append": false, 00:04:45.905 "compare": false, 00:04:45.905 "compare_and_write": false, 00:04:45.905 "abort": true, 00:04:45.905 "seek_hole": false, 00:04:45.905 "seek_data": false, 00:04:45.905 "copy": true, 00:04:45.905 "nvme_iov_md": false 00:04:45.905 }, 00:04:45.905 "memory_domains": [ 00:04:45.905 { 00:04:45.905 "dma_device_id": "system", 00:04:45.905 "dma_device_type": 1 00:04:45.905 }, 00:04:45.905 { 00:04:45.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.905 "dma_device_type": 2 00:04:45.905 } 00:04:45.905 ], 00:04:45.905 "driver_specific": {} 00:04:45.905 }, 00:04:45.905 { 00:04:45.905 "name": "Passthru0", 00:04:45.905 "aliases": [ 00:04:45.905 "84476fa8-ecd5-5f2d-a7ab-b9f35df40ed9" 00:04:45.905 ], 00:04:45.905 "product_name": "passthru", 00:04:45.905 "block_size": 512, 00:04:45.905 "num_blocks": 16384, 00:04:45.905 "uuid": "84476fa8-ecd5-5f2d-a7ab-b9f35df40ed9", 00:04:45.905 "assigned_rate_limits": { 00:04:45.906 "rw_ios_per_sec": 0, 00:04:45.906 "rw_mbytes_per_sec": 0, 00:04:45.906 "r_mbytes_per_sec": 0, 00:04:45.906 "w_mbytes_per_sec": 0 00:04:45.906 }, 00:04:45.906 "claimed": false, 00:04:45.906 "zoned": false, 00:04:45.906 "supported_io_types": { 00:04:45.906 "read": true, 00:04:45.906 "write": true, 00:04:45.906 "unmap": true, 00:04:45.906 "flush": true, 00:04:45.906 "reset": true, 00:04:45.906 "nvme_admin": false, 00:04:45.906 "nvme_io": false, 00:04:45.906 "nvme_io_md": 
false, 00:04:45.906 "write_zeroes": true, 00:04:45.906 "zcopy": true, 00:04:45.906 "get_zone_info": false, 00:04:45.906 "zone_management": false, 00:04:45.906 "zone_append": false, 00:04:45.906 "compare": false, 00:04:45.906 "compare_and_write": false, 00:04:45.906 "abort": true, 00:04:45.906 "seek_hole": false, 00:04:45.906 "seek_data": false, 00:04:45.906 "copy": true, 00:04:45.906 "nvme_iov_md": false 00:04:45.906 }, 00:04:45.906 "memory_domains": [ 00:04:45.906 { 00:04:45.906 "dma_device_id": "system", 00:04:45.906 "dma_device_type": 1 00:04:45.906 }, 00:04:45.906 { 00:04:45.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.906 "dma_device_type": 2 00:04:45.906 } 00:04:45.906 ], 00:04:45.906 "driver_specific": { 00:04:45.906 "passthru": { 00:04:45.906 "name": "Passthru0", 00:04:45.906 "base_bdev_name": "Malloc2" 00:04:45.906 } 00:04:45.906 } 00:04:45.906 } 00:04:45.906 ]' 00:04:45.906 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.906 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.906 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.906 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.906 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.187 ************************************ 00:04:46.187 END TEST rpc_daemon_integrity 00:04:46.187 ************************************ 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.187 00:04:46.187 real 0m0.320s 00:04:46.187 user 0m0.212s 00:04:46.187 sys 0m0.043s 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.187 09:19:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.187 09:19:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.187 09:19:10 rpc -- rpc/rpc.sh@84 -- # killprocess 56857 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@950 -- # '[' -z 56857 ']' 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@954 -- # kill -0 56857 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56857 00:04:46.187 killing process with pid 56857 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@956 
-- # process_name=reactor_0 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56857' 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@969 -- # kill 56857 00:04:46.187 09:19:10 rpc -- common/autotest_common.sh@974 -- # wait 56857 00:04:46.459 00:04:46.459 real 0m2.398s 00:04:46.459 user 0m3.106s 00:04:46.459 sys 0m0.626s 00:04:46.459 ************************************ 00:04:46.459 END TEST rpc 00:04:46.459 ************************************ 00:04:46.459 09:19:10 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.459 09:19:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.717 09:19:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.717 09:19:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.717 09:19:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.717 09:19:10 -- common/autotest_common.sh@10 -- # set +x 00:04:46.717 ************************************ 00:04:46.717 START TEST skip_rpc 00:04:46.717 ************************************ 00:04:46.717 09:19:10 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.717 * Looking for test storage... 00:04:46.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.717 09:19:10 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:46.717 09:19:10 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:46.717 09:19:10 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:46.717 09:19:11 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.717 09:19:11 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.718 09:19:11 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:46.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.718 --rc genhtml_branch_coverage=1 00:04:46.718 --rc genhtml_function_coverage=1 00:04:46.718 --rc genhtml_legend=1 00:04:46.718 --rc geninfo_all_blocks=1 00:04:46.718 --rc geninfo_unexecuted_blocks=1 00:04:46.718 00:04:46.718 ' 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:46.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.718 --rc genhtml_branch_coverage=1 00:04:46.718 --rc genhtml_function_coverage=1 00:04:46.718 --rc genhtml_legend=1 00:04:46.718 --rc geninfo_all_blocks=1 00:04:46.718 --rc geninfo_unexecuted_blocks=1 00:04:46.718 00:04:46.718 ' 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:46.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.718 --rc genhtml_branch_coverage=1 00:04:46.718 --rc genhtml_function_coverage=1 00:04:46.718 --rc genhtml_legend=1 00:04:46.718 --rc geninfo_all_blocks=1 00:04:46.718 --rc geninfo_unexecuted_blocks=1 00:04:46.718 00:04:46.718 ' 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:46.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.718 --rc genhtml_branch_coverage=1 00:04:46.718 --rc genhtml_function_coverage=1 00:04:46.718 --rc genhtml_legend=1 00:04:46.718 --rc geninfo_all_blocks=1 00:04:46.718 --rc geninfo_unexecuted_blocks=1 00:04:46.718 00:04:46.718 ' 00:04:46.718 09:19:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.718 09:19:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.718 09:19:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.718 09:19:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.718 ************************************ 00:04:46.718 START TEST skip_rpc 00:04:46.718 ************************************ 00:04:46.718 09:19:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:46.718 09:19:11 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57056 00:04:46.718 09:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.718 09:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.718 09:19:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.976 [2024-10-16 09:19:11.143094] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:04:46.976 [2024-10-16 09:19:11.143191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57056 ] 00:04:46.976 [2024-10-16 09:19:11.282098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.976 [2024-10-16 09:19:11.323530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.235 [2024-10-16 09:19:11.390279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57056 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57056 ']' 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57056 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57056 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 57056' 00:04:52.545 killing process with pid 57056 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57056 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57056 00:04:52.545 ************************************ 00:04:52.545 END TEST skip_rpc 00:04:52.545 ************************************ 00:04:52.545 00:04:52.545 real 0m5.429s 00:04:52.545 user 0m5.071s 00:04:52.545 sys 0m0.276s 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.545 09:19:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.545 09:19:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.545 09:19:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.545 09:19:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.545 09:19:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.545 ************************************ 00:04:52.545 START TEST skip_rpc_with_json 00:04:52.545 ************************************ 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57142 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57142 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57142 ']' 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.545 09:19:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.545 [2024-10-16 09:19:16.625625] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
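The target starting here (pid 57142) is the first half of the config round-trip that skip_rpc_with_json exercises in the lines below: create the TCP transport over RPC, dump the running configuration with save_config into test/rpc/config.json, then relaunch spdk_tgt against that JSON with no RPC server. A minimal sketch of that flow, using only the RPCs, flags, and paths that appear in this log:

    # First target: build up state over RPC, then save it (wait for /var/tmp/spdk.sock first)
    build/bin/spdk_tgt -m 0x1 &
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    # Stop the first target, then replay the saved JSON without an RPC server,
    # as the second spdk_tgt invocation later in this test does
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json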
00:04:52.545 [2024-10-16 09:19:16.625731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57142 ] 00:04:52.545 [2024-10-16 09:19:16.761733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.545 [2024-10-16 09:19:16.806314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.545 [2024-10-16 09:19:16.880506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.803 [2024-10-16 09:19:17.075472] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:52.803 request: 00:04:52.803 { 00:04:52.803 "trtype": "tcp", 00:04:52.803 "method": "nvmf_get_transports", 00:04:52.803 "req_id": 1 00:04:52.803 } 00:04:52.803 Got JSON-RPC error response 00:04:52.803 response: 00:04:52.803 { 00:04:52.803 "code": -19, 00:04:52.803 "message": "No such device" 00:04:52.803 } 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.803 [2024-10-16 09:19:17.087609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.803 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.061 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.061 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.061 { 00:04:53.061 "subsystems": [ 00:04:53.061 { 00:04:53.061 "subsystem": "fsdev", 00:04:53.061 "config": [ 00:04:53.061 { 00:04:53.061 "method": "fsdev_set_opts", 00:04:53.061 "params": { 00:04:53.061 "fsdev_io_pool_size": 65535, 00:04:53.061 "fsdev_io_cache_size": 256 00:04:53.061 } 00:04:53.061 } 00:04:53.061 ] 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "subsystem": "keyring", 00:04:53.061 "config": [] 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "subsystem": "iobuf", 00:04:53.061 "config": [ 00:04:53.061 { 00:04:53.061 "method": "iobuf_set_options", 00:04:53.061 "params": { 00:04:53.061 "small_pool_count": 8192, 00:04:53.061 "large_pool_count": 1024, 00:04:53.061 "small_bufsize": 8192, 00:04:53.061 "large_bufsize": 135168 00:04:53.061 } 00:04:53.061 } 00:04:53.061 ] 00:04:53.061 
}, 00:04:53.061 { 00:04:53.061 "subsystem": "sock", 00:04:53.061 "config": [ 00:04:53.061 { 00:04:53.061 "method": "sock_set_default_impl", 00:04:53.061 "params": { 00:04:53.061 "impl_name": "uring" 00:04:53.061 } 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "method": "sock_impl_set_options", 00:04:53.061 "params": { 00:04:53.061 "impl_name": "ssl", 00:04:53.061 "recv_buf_size": 4096, 00:04:53.061 "send_buf_size": 4096, 00:04:53.061 "enable_recv_pipe": true, 00:04:53.061 "enable_quickack": false, 00:04:53.061 "enable_placement_id": 0, 00:04:53.061 "enable_zerocopy_send_server": true, 00:04:53.061 "enable_zerocopy_send_client": false, 00:04:53.061 "zerocopy_threshold": 0, 00:04:53.061 "tls_version": 0, 00:04:53.061 "enable_ktls": false 00:04:53.061 } 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "method": "sock_impl_set_options", 00:04:53.061 "params": { 00:04:53.061 "impl_name": "posix", 00:04:53.061 "recv_buf_size": 2097152, 00:04:53.061 "send_buf_size": 2097152, 00:04:53.061 "enable_recv_pipe": true, 00:04:53.061 "enable_quickack": false, 00:04:53.061 "enable_placement_id": 0, 00:04:53.061 "enable_zerocopy_send_server": true, 00:04:53.061 "enable_zerocopy_send_client": false, 00:04:53.061 "zerocopy_threshold": 0, 00:04:53.061 "tls_version": 0, 00:04:53.061 "enable_ktls": false 00:04:53.061 } 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "method": "sock_impl_set_options", 00:04:53.061 "params": { 00:04:53.061 "impl_name": "uring", 00:04:53.061 "recv_buf_size": 2097152, 00:04:53.061 "send_buf_size": 2097152, 00:04:53.061 "enable_recv_pipe": true, 00:04:53.061 "enable_quickack": false, 00:04:53.061 "enable_placement_id": 0, 00:04:53.061 "enable_zerocopy_send_server": false, 00:04:53.061 "enable_zerocopy_send_client": false, 00:04:53.061 "zerocopy_threshold": 0, 00:04:53.061 "tls_version": 0, 00:04:53.061 "enable_ktls": false 00:04:53.061 } 00:04:53.061 } 00:04:53.061 ] 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "subsystem": "vmd", 00:04:53.061 "config": [] 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "subsystem": "accel", 00:04:53.061 "config": [ 00:04:53.061 { 00:04:53.061 "method": "accel_set_options", 00:04:53.061 "params": { 00:04:53.061 "small_cache_size": 128, 00:04:53.061 "large_cache_size": 16, 00:04:53.061 "task_count": 2048, 00:04:53.061 "sequence_count": 2048, 00:04:53.061 "buf_count": 2048 00:04:53.061 } 00:04:53.061 } 00:04:53.061 ] 00:04:53.061 }, 00:04:53.061 { 00:04:53.061 "subsystem": "bdev", 00:04:53.061 "config": [ 00:04:53.061 { 00:04:53.061 "method": "bdev_set_options", 00:04:53.061 "params": { 00:04:53.061 "bdev_io_pool_size": 65535, 00:04:53.061 "bdev_io_cache_size": 256, 00:04:53.061 "bdev_auto_examine": true, 00:04:53.061 "iobuf_small_cache_size": 128, 00:04:53.062 "iobuf_large_cache_size": 16 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "bdev_raid_set_options", 00:04:53.062 "params": { 00:04:53.062 "process_window_size_kb": 1024, 00:04:53.062 "process_max_bandwidth_mb_sec": 0 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "bdev_iscsi_set_options", 00:04:53.062 "params": { 00:04:53.062 "timeout_sec": 30 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "bdev_nvme_set_options", 00:04:53.062 "params": { 00:04:53.062 "action_on_timeout": "none", 00:04:53.062 "timeout_us": 0, 00:04:53.062 "timeout_admin_us": 0, 00:04:53.062 "keep_alive_timeout_ms": 10000, 00:04:53.062 "arbitration_burst": 0, 00:04:53.062 "low_priority_weight": 0, 00:04:53.062 "medium_priority_weight": 0, 00:04:53.062 "high_priority_weight": 0, 
00:04:53.062 "nvme_adminq_poll_period_us": 10000, 00:04:53.062 "nvme_ioq_poll_period_us": 0, 00:04:53.062 "io_queue_requests": 0, 00:04:53.062 "delay_cmd_submit": true, 00:04:53.062 "transport_retry_count": 4, 00:04:53.062 "bdev_retry_count": 3, 00:04:53.062 "transport_ack_timeout": 0, 00:04:53.062 "ctrlr_loss_timeout_sec": 0, 00:04:53.062 "reconnect_delay_sec": 0, 00:04:53.062 "fast_io_fail_timeout_sec": 0, 00:04:53.062 "disable_auto_failback": false, 00:04:53.062 "generate_uuids": false, 00:04:53.062 "transport_tos": 0, 00:04:53.062 "nvme_error_stat": false, 00:04:53.062 "rdma_srq_size": 0, 00:04:53.062 "io_path_stat": false, 00:04:53.062 "allow_accel_sequence": false, 00:04:53.062 "rdma_max_cq_size": 0, 00:04:53.062 "rdma_cm_event_timeout_ms": 0, 00:04:53.062 "dhchap_digests": [ 00:04:53.062 "sha256", 00:04:53.062 "sha384", 00:04:53.062 "sha512" 00:04:53.062 ], 00:04:53.062 "dhchap_dhgroups": [ 00:04:53.062 "null", 00:04:53.062 "ffdhe2048", 00:04:53.062 "ffdhe3072", 00:04:53.062 "ffdhe4096", 00:04:53.062 "ffdhe6144", 00:04:53.062 "ffdhe8192" 00:04:53.062 ] 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "bdev_nvme_set_hotplug", 00:04:53.062 "params": { 00:04:53.062 "period_us": 100000, 00:04:53.062 "enable": false 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "bdev_wait_for_examine" 00:04:53.062 } 00:04:53.062 ] 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "scsi", 00:04:53.062 "config": null 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "scheduler", 00:04:53.062 "config": [ 00:04:53.062 { 00:04:53.062 "method": "framework_set_scheduler", 00:04:53.062 "params": { 00:04:53.062 "name": "static" 00:04:53.062 } 00:04:53.062 } 00:04:53.062 ] 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "vhost_scsi", 00:04:53.062 "config": [] 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "vhost_blk", 00:04:53.062 "config": [] 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "ublk", 00:04:53.062 "config": [] 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "nbd", 00:04:53.062 "config": [] 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "nvmf", 00:04:53.062 "config": [ 00:04:53.062 { 00:04:53.062 "method": "nvmf_set_config", 00:04:53.062 "params": { 00:04:53.062 "discovery_filter": "match_any", 00:04:53.062 "admin_cmd_passthru": { 00:04:53.062 "identify_ctrlr": false 00:04:53.062 }, 00:04:53.062 "dhchap_digests": [ 00:04:53.062 "sha256", 00:04:53.062 "sha384", 00:04:53.062 "sha512" 00:04:53.062 ], 00:04:53.062 "dhchap_dhgroups": [ 00:04:53.062 "null", 00:04:53.062 "ffdhe2048", 00:04:53.062 "ffdhe3072", 00:04:53.062 "ffdhe4096", 00:04:53.062 "ffdhe6144", 00:04:53.062 "ffdhe8192" 00:04:53.062 ] 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "nvmf_set_max_subsystems", 00:04:53.062 "params": { 00:04:53.062 "max_subsystems": 1024 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "nvmf_set_crdt", 00:04:53.062 "params": { 00:04:53.062 "crdt1": 0, 00:04:53.062 "crdt2": 0, 00:04:53.062 "crdt3": 0 00:04:53.062 } 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "method": "nvmf_create_transport", 00:04:53.062 "params": { 00:04:53.062 "trtype": "TCP", 00:04:53.062 "max_queue_depth": 128, 00:04:53.062 "max_io_qpairs_per_ctrlr": 127, 00:04:53.062 "in_capsule_data_size": 4096, 00:04:53.062 "max_io_size": 131072, 00:04:53.062 "io_unit_size": 131072, 00:04:53.062 "max_aq_depth": 128, 00:04:53.062 "num_shared_buffers": 511, 00:04:53.062 "buf_cache_size": 4294967295, 00:04:53.062 
"dif_insert_or_strip": false, 00:04:53.062 "zcopy": false, 00:04:53.062 "c2h_success": true, 00:04:53.062 "sock_priority": 0, 00:04:53.062 "abort_timeout_sec": 1, 00:04:53.062 "ack_timeout": 0, 00:04:53.062 "data_wr_pool_size": 0 00:04:53.062 } 00:04:53.062 } 00:04:53.062 ] 00:04:53.062 }, 00:04:53.062 { 00:04:53.062 "subsystem": "iscsi", 00:04:53.062 "config": [ 00:04:53.062 { 00:04:53.062 "method": "iscsi_set_options", 00:04:53.062 "params": { 00:04:53.062 "node_base": "iqn.2016-06.io.spdk", 00:04:53.062 "max_sessions": 128, 00:04:53.062 "max_connections_per_session": 2, 00:04:53.062 "max_queue_depth": 64, 00:04:53.062 "default_time2wait": 2, 00:04:53.062 "default_time2retain": 20, 00:04:53.062 "first_burst_length": 8192, 00:04:53.062 "immediate_data": true, 00:04:53.062 "allow_duplicated_isid": false, 00:04:53.062 "error_recovery_level": 0, 00:04:53.062 "nop_timeout": 60, 00:04:53.062 "nop_in_interval": 30, 00:04:53.062 "disable_chap": false, 00:04:53.062 "require_chap": false, 00:04:53.062 "mutual_chap": false, 00:04:53.062 "chap_group": 0, 00:04:53.062 "max_large_datain_per_connection": 64, 00:04:53.062 "max_r2t_per_connection": 4, 00:04:53.062 "pdu_pool_size": 36864, 00:04:53.062 "immediate_data_pool_size": 16384, 00:04:53.062 "data_out_pool_size": 2048 00:04:53.062 } 00:04:53.062 } 00:04:53.062 ] 00:04:53.062 } 00:04:53.062 ] 00:04:53.062 } 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57142 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57142 ']' 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57142 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57142 00:04:53.062 killing process with pid 57142 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57142' 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57142 00:04:53.062 09:19:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57142 00:04:53.319 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57162 00:04:53.319 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.319 09:19:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57162 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57162 ']' 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57162 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57162 00:04:58.587 killing process with pid 57162 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57162' 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57162 00:04:58.587 09:19:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57162 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:58.846 ************************************ 00:04:58.846 END TEST skip_rpc_with_json 00:04:58.846 ************************************ 00:04:58.846 00:04:58.846 real 0m6.530s 00:04:58.846 user 0m6.065s 00:04:58.846 sys 0m0.642s 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.846 09:19:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:58.846 09:19:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.846 09:19:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.846 09:19:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.846 ************************************ 00:04:58.846 START TEST skip_rpc_with_delay 00:04:58.846 ************************************ 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:58.846 [2024-10-16 09:19:23.218412] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.846 00:04:58.846 real 0m0.101s 00:04:58.846 user 0m0.075s 00:04:58.846 sys 0m0.024s 00:04:58.846 ************************************ 00:04:58.846 END TEST skip_rpc_with_delay 00:04:58.846 ************************************ 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.846 09:19:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:59.104 09:19:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:59.104 09:19:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:59.104 09:19:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:59.104 09:19:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.104 09:19:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.104 09:19:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.104 ************************************ 00:04:59.104 START TEST exit_on_failed_rpc_init 00:04:59.104 ************************************ 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57272 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57272 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57272 ']' 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.104 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.104 [2024-10-16 09:19:23.364869] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:04:59.104 [2024-10-16 09:19:23.365186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57272 ] 00:04:59.105 [2024-10-16 09:19:23.506000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.363 [2024-10-16 09:19:23.571509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.363 [2024-10-16 09:19:23.651707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.621 09:19:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:59.621 [2024-10-16 09:19:23.935449] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:04:59.621 [2024-10-16 09:19:23.935743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57282 ] 00:04:59.880 [2024-10-16 09:19:24.076534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.880 [2024-10-16 09:19:24.136569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.880 [2024-10-16 09:19:24.136668] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:59.880 [2024-10-16 09:19:24.136692] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.880 [2024-10-16 09:19:24.136704] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57272 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57272 ']' 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57272 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57272 00:04:59.880 killing process with pid 57272 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57272' 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57272 00:04:59.880 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57272 00:05:00.446 00:05:00.446 real 0m1.319s 00:05:00.446 user 0m1.378s 00:05:00.446 sys 0m0.408s 00:05:00.446 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.446 09:19:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.446 ************************************ 00:05:00.446 END TEST exit_on_failed_rpc_init 00:05:00.446 ************************************ 00:05:00.446 09:19:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.446 ************************************ 00:05:00.446 END TEST skip_rpc 00:05:00.446 ************************************ 00:05:00.446 00:05:00.446 real 0m13.787s 00:05:00.446 user 0m12.776s 00:05:00.446 sys 0m1.561s 00:05:00.446 09:19:24 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.446 09:19:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.446 09:19:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.446 09:19:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.446 09:19:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.446 09:19:24 -- common/autotest_common.sh@10 -- # set +x 00:05:00.446 
************************************ 00:05:00.446 START TEST rpc_client 00:05:00.446 ************************************ 00:05:00.446 09:19:24 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.446 * Looking for test storage... 00:05:00.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:00.446 09:19:24 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:00.446 09:19:24 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:00.446 09:19:24 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.730 09:19:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.730 --rc genhtml_branch_coverage=1 00:05:00.730 --rc genhtml_function_coverage=1 00:05:00.730 --rc genhtml_legend=1 00:05:00.730 --rc geninfo_all_blocks=1 00:05:00.730 --rc geninfo_unexecuted_blocks=1 00:05:00.730 00:05:00.730 ' 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.730 --rc genhtml_branch_coverage=1 00:05:00.730 --rc genhtml_function_coverage=1 00:05:00.730 --rc genhtml_legend=1 00:05:00.730 --rc geninfo_all_blocks=1 00:05:00.730 --rc geninfo_unexecuted_blocks=1 00:05:00.730 00:05:00.730 ' 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.730 --rc genhtml_branch_coverage=1 00:05:00.730 --rc genhtml_function_coverage=1 00:05:00.730 --rc genhtml_legend=1 00:05:00.730 --rc geninfo_all_blocks=1 00:05:00.730 --rc geninfo_unexecuted_blocks=1 00:05:00.730 00:05:00.730 ' 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:00.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.730 --rc genhtml_branch_coverage=1 00:05:00.730 --rc genhtml_function_coverage=1 00:05:00.730 --rc genhtml_legend=1 00:05:00.730 --rc geninfo_all_blocks=1 00:05:00.730 --rc geninfo_unexecuted_blocks=1 00:05:00.730 00:05:00.730 ' 00:05:00.730 09:19:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:00.730 OK 00:05:00.730 09:19:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.730 00:05:00.730 real 0m0.256s 00:05:00.730 user 0m0.169s 00:05:00.730 sys 0m0.095s 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:00.730 09:19:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:00.730 ************************************ 00:05:00.730 END TEST rpc_client 00:05:00.730 ************************************ 00:05:00.730 09:19:25 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.731 09:19:25 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:00.731 09:19:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:00.731 09:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:00.731 ************************************ 00:05:00.731 START TEST json_config 00:05:00.731 ************************************ 00:05:00.731 09:19:25 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.731 09:19:25 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:00.731 09:19:25 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:00.731 09:19:25 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.002 09:19:25 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.002 09:19:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.002 09:19:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.002 09:19:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.002 09:19:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.002 09:19:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.002 09:19:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.002 09:19:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.002 09:19:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:01.002 09:19:25 json_config -- scripts/common.sh@345 -- # : 1 00:05:01.002 09:19:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.002 09:19:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.002 09:19:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:01.002 09:19:25 json_config -- scripts/common.sh@353 -- # local d=1 00:05:01.002 09:19:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.002 09:19:25 json_config -- scripts/common.sh@355 -- # echo 1 00:05:01.002 09:19:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.002 09:19:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@353 -- # local d=2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.002 09:19:25 json_config -- scripts/common.sh@355 -- # echo 2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.002 09:19:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.002 09:19:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.002 09:19:25 json_config -- scripts/common.sh@368 -- # return 0 00:05:01.002 09:19:25 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.002 09:19:25 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.002 --rc genhtml_branch_coverage=1 00:05:01.002 --rc genhtml_function_coverage=1 00:05:01.002 --rc genhtml_legend=1 00:05:01.002 --rc geninfo_all_blocks=1 00:05:01.002 --rc geninfo_unexecuted_blocks=1 00:05:01.002 00:05:01.002 ' 00:05:01.002 09:19:25 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.002 --rc genhtml_branch_coverage=1 00:05:01.002 --rc genhtml_function_coverage=1 00:05:01.002 --rc genhtml_legend=1 00:05:01.002 --rc geninfo_all_blocks=1 00:05:01.002 --rc geninfo_unexecuted_blocks=1 00:05:01.002 00:05:01.002 ' 00:05:01.002 09:19:25 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.002 --rc genhtml_branch_coverage=1 00:05:01.002 --rc genhtml_function_coverage=1 00:05:01.002 --rc genhtml_legend=1 00:05:01.002 --rc geninfo_all_blocks=1 00:05:01.002 --rc geninfo_unexecuted_blocks=1 00:05:01.002 00:05:01.002 ' 00:05:01.002 09:19:25 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.002 --rc genhtml_branch_coverage=1 00:05:01.002 --rc genhtml_function_coverage=1 00:05:01.002 --rc genhtml_legend=1 00:05:01.002 --rc geninfo_all_blocks=1 00:05:01.002 --rc geninfo_unexecuted_blocks=1 00:05:01.002 00:05:01.002 ' 00:05:01.002 09:19:25 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.002 09:19:25 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.002 09:19:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.003 09:19:25 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.003 09:19:25 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.003 09:19:25 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.003 09:19:25 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.003 09:19:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.003 09:19:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.003 09:19:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.003 09:19:25 json_config -- paths/export.sh@5 -- # export PATH 00:05:01.003 09:19:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@51 -- # : 0 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.003 09:19:25 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.003 09:19:25 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:01.003 INFO: JSON configuration test init 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.003 Waiting for target to run... 00:05:01.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:01.003 09:19:25 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:01.003 09:19:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:01.003 09:19:25 json_config -- json_config/common.sh@10 -- # shift 00:05:01.003 09:19:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.003 09:19:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.003 09:19:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.003 09:19:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.003 09:19:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.003 09:19:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57422 00:05:01.003 09:19:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:01.003 09:19:25 json_config -- json_config/common.sh@25 -- # waitforlisten 57422 /var/tmp/spdk_tgt.sock 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@831 -- # '[' -z 57422 ']' 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.003 09:19:25 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.003 09:19:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.003 [2024-10-16 09:19:25.308348] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:01.003 [2024-10-16 09:19:25.308705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57422 ] 00:05:01.570 [2024-10-16 09:19:25.753350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.570 [2024-10-16 09:19:25.808885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.137 09:19:26 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.137 09:19:26 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:02.137 09:19:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:02.137 00:05:02.137 09:19:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:02.137 09:19:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:02.137 09:19:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.137 09:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.137 09:19:26 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:02.137 09:19:26 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:02.137 09:19:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.137 09:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.137 09:19:26 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:02.138 09:19:26 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:02.138 09:19:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:02.396 [2024-10-16 09:19:26.761043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:02.654 09:19:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.654 09:19:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:02.654 09:19:26 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:02.654 09:19:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@54 -- # sort 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:02.912 09:19:27 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:02.912 09:19:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.912 09:19:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:03.169 09:19:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:03.169 09:19:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:03.169 09:19:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.169 09:19:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.426 MallocForNvmf0 00:05:03.426 09:19:27 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.426 09:19:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.683 MallocForNvmf1 00:05:03.683 09:19:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.683 09:19:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:03.940 [2024-10-16 09:19:28.244235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.940 09:19:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:03.940 09:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.198 09:19:28 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.198 09:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.796 09:19:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.796 09:19:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.796 09:19:29 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:04.796 09:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.055 [2024-10-16 09:19:29.440896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.055 09:19:29 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:05.055 09:19:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.055 09:19:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.314 09:19:29 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:05.314 09:19:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.314 09:19:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.314 09:19:29 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:05.314 09:19:29 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.314 09:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.572 MallocBdevForConfigChangeCheck 00:05:05.572 09:19:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:05.572 09:19:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.572 09:19:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.572 09:19:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:05.572 09:19:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.140 INFO: shutting down applications... 00:05:06.140 09:19:30 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:05:06.140 09:19:30 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:06.140 09:19:30 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:06.140 09:19:30 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:06.140 09:19:30 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.399 Calling clear_iscsi_subsystem 00:05:06.399 Calling clear_nvmf_subsystem 00:05:06.399 Calling clear_nbd_subsystem 00:05:06.399 Calling clear_ublk_subsystem 00:05:06.399 Calling clear_vhost_blk_subsystem 00:05:06.399 Calling clear_vhost_scsi_subsystem 00:05:06.399 Calling clear_bdev_subsystem 00:05:06.399 09:19:30 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:06.399 09:19:30 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:06.399 09:19:30 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:06.399 09:19:30 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.399 09:19:30 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.399 09:19:30 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:06.966 09:19:31 json_config -- json_config/json_config.sh@352 -- # break 00:05:06.966 09:19:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:06.966 09:19:31 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:06.966 09:19:31 json_config -- json_config/common.sh@31 -- # local app=target 00:05:06.966 09:19:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.966 09:19:31 json_config -- json_config/common.sh@35 -- # [[ -n 57422 ]] 00:05:06.966 09:19:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57422 00:05:06.966 09:19:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.966 09:19:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.966 09:19:31 json_config -- json_config/common.sh@41 -- # kill -0 57422 00:05:06.966 09:19:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.225 09:19:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.225 09:19:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.225 09:19:31 json_config -- json_config/common.sh@41 -- # kill -0 57422 00:05:07.225 09:19:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.225 09:19:31 json_config -- json_config/common.sh@43 -- # break 00:05:07.225 09:19:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.225 SPDK target shutdown done 00:05:07.225 09:19:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.225 09:19:31 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:07.225 INFO: relaunching applications... 
00:05:07.225 09:19:31 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.225 09:19:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.225 09:19:31 json_config -- json_config/common.sh@10 -- # shift 00:05:07.225 09:19:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.225 09:19:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.225 09:19:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.225 09:19:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.225 09:19:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.225 09:19:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57623 00:05:07.225 09:19:31 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.225 Waiting for target to run... 00:05:07.225 09:19:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.225 09:19:31 json_config -- json_config/common.sh@25 -- # waitforlisten 57623 /var/tmp/spdk_tgt.sock 00:05:07.225 09:19:31 json_config -- common/autotest_common.sh@831 -- # '[' -z 57623 ']' 00:05:07.225 09:19:31 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.225 09:19:31 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.225 09:19:31 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.225 09:19:31 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.225 09:19:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.484 [2024-10-16 09:19:31.667669] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:07.484 [2024-10-16 09:19:31.667777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57623 ] 00:05:07.744 [2024-10-16 09:19:32.111148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.003 [2024-10-16 09:19:32.157439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.003 [2024-10-16 09:19:32.295072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:08.263 [2024-10-16 09:19:32.512371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.263 [2024-10-16 09:19:32.544441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:08.522 09:19:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.522 00:05:08.522 09:19:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:08.522 09:19:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.522 09:19:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:08.522 INFO: Checking if target configuration is the same... 
00:05:08.522 09:19:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:08.522 09:19:32 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.522 09:19:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:08.522 09:19:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.522 + '[' 2 -ne 2 ']' 00:05:08.522 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:08.522 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:08.522 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:08.522 +++ basename /dev/fd/62 00:05:08.522 ++ mktemp /tmp/62.XXX 00:05:08.522 + tmp_file_1=/tmp/62.dpB 00:05:08.522 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.522 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:08.522 + tmp_file_2=/tmp/spdk_tgt_config.json.vzU 00:05:08.522 + ret=0 00:05:08.522 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.090 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.090 + diff -u /tmp/62.dpB /tmp/spdk_tgt_config.json.vzU 00:05:09.090 INFO: JSON config files are the same 00:05:09.090 + echo 'INFO: JSON config files are the same' 00:05:09.090 + rm /tmp/62.dpB /tmp/spdk_tgt_config.json.vzU 00:05:09.090 + exit 0 00:05:09.090 09:19:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:09.090 INFO: changing configuration and checking if this can be detected... 00:05:09.090 09:19:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:09.090 09:19:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.090 09:19:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.361 09:19:33 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.361 09:19:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:09.361 09:19:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.361 + '[' 2 -ne 2 ']' 00:05:09.361 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:09.361 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:09.361 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:09.361 +++ basename /dev/fd/62 00:05:09.361 ++ mktemp /tmp/62.XXX 00:05:09.361 + tmp_file_1=/tmp/62.cqO 00:05:09.361 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.361 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.361 + tmp_file_2=/tmp/spdk_tgt_config.json.neY 00:05:09.361 + ret=0 00:05:09.361 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.636 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.636 + diff -u /tmp/62.cqO /tmp/spdk_tgt_config.json.neY 00:05:09.636 + ret=1 00:05:09.636 + echo '=== Start of file: /tmp/62.cqO ===' 00:05:09.636 + cat /tmp/62.cqO 00:05:09.636 + echo '=== End of file: /tmp/62.cqO ===' 00:05:09.636 + echo '' 00:05:09.636 + echo '=== Start of file: /tmp/spdk_tgt_config.json.neY ===' 00:05:09.636 + cat /tmp/spdk_tgt_config.json.neY 00:05:09.636 + echo '=== End of file: /tmp/spdk_tgt_config.json.neY ===' 00:05:09.636 + echo '' 00:05:09.636 + rm /tmp/62.cqO /tmp/spdk_tgt_config.json.neY 00:05:09.636 + exit 1 00:05:09.636 INFO: configuration change detected. 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:09.636 09:19:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.636 09:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@324 -- # [[ -n 57623 ]] 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:09.636 09:19:33 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:09.636 09:19:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.636 09:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.636 09:19:34 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:09.636 09:19:34 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:09.636 09:19:34 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:09.636 09:19:34 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:09.636 09:19:34 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:09.636 09:19:34 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:09.636 09:19:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.636 09:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.896 09:19:34 json_config -- json_config/json_config.sh@330 -- # killprocess 57623 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@950 -- # '[' -z 57623 ']' 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@954 -- # kill -0 57623 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@955 -- # uname 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57623 00:05:09.896 
killing process with pid 57623 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57623' 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@969 -- # kill 57623 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@974 -- # wait 57623 00:05:09.896 09:19:34 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.896 09:19:34 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.896 09:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.155 INFO: Success 00:05:10.155 09:19:34 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:10.155 09:19:34 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:10.155 ************************************ 00:05:10.155 END TEST json_config 00:05:10.155 ************************************ 00:05:10.155 00:05:10.155 real 0m9.315s 00:05:10.155 user 0m13.589s 00:05:10.155 sys 0m1.865s 00:05:10.155 09:19:34 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.155 09:19:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.155 09:19:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.155 09:19:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.155 09:19:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.155 09:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:10.155 ************************************ 00:05:10.155 START TEST json_config_extra_key 00:05:10.155 ************************************ 00:05:10.155 09:19:34 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.155 09:19:34 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.155 09:19:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.155 09:19:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:10.155 09:19:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.155 09:19:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.155 09:19:34 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:10.156 09:19:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.156 09:19:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:10.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.156 --rc genhtml_branch_coverage=1 00:05:10.156 --rc genhtml_function_coverage=1 00:05:10.156 --rc genhtml_legend=1 00:05:10.156 --rc geninfo_all_blocks=1 00:05:10.156 --rc geninfo_unexecuted_blocks=1 00:05:10.156 00:05:10.156 ' 00:05:10.156 09:19:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:10.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.156 --rc genhtml_branch_coverage=1 00:05:10.156 --rc genhtml_function_coverage=1 00:05:10.156 --rc genhtml_legend=1 00:05:10.156 --rc geninfo_all_blocks=1 00:05:10.156 --rc geninfo_unexecuted_blocks=1 00:05:10.156 00:05:10.156 ' 00:05:10.156 09:19:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:10.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.156 --rc genhtml_branch_coverage=1 00:05:10.156 --rc genhtml_function_coverage=1 00:05:10.156 --rc genhtml_legend=1 00:05:10.156 --rc geninfo_all_blocks=1 00:05:10.156 --rc geninfo_unexecuted_blocks=1 00:05:10.156 00:05:10.156 ' 00:05:10.156 09:19:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:10.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.156 --rc genhtml_branch_coverage=1 00:05:10.156 --rc genhtml_function_coverage=1 00:05:10.156 --rc genhtml_legend=1 00:05:10.156 --rc geninfo_all_blocks=1 00:05:10.156 --rc geninfo_unexecuted_blocks=1 00:05:10.156 00:05:10.156 ' 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.156 09:19:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.156 09:19:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.156 09:19:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.156 09:19:34 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.156 09:19:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:10.156 09:19:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.156 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.156 09:19:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:10.156 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:10.415 INFO: launching applications... 00:05:10.416 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:10.416 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:10.416 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.416 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
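The per-app associative arrays traced just above (app_pid, app_socket, app_params, configs_path) are what the "launching applications..." step consumes: json_config_test_start_app expands them into the spdk_tgt command line seen in the next trace. A minimal bash sketch of that flow, assuming the helper does little more than glue the arrays together and hand the result to waitforlisten (option handling simplified):

    # same values the trace above stored in json_config/common.sh's arrays
    declare -A app_pid app_socket app_params configs_path
    app=target
    app_socket[$app]=/var/tmp/spdk_tgt.sock
    app_params[$app]='-m 0x1 -s 1024'
    configs_path[$app]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r ${app_socket[$app]} --json ${configs_path[$app]} &
    app_pid[$app]=$!                                    # remembered so the teardown can signal it
    waitforlisten ${app_pid[$app]} ${app_socket[$app]}  # autotest helper: waits for the RPC socket to answer

The socket path doubles as the RPC address (-r), which is why the same value is passed to waitforlisten.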
00:05:10.416 09:19:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57777 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.416 Waiting for target to run... 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57777 /var/tmp/spdk_tgt.sock 00:05:10.416 09:19:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.416 09:19:34 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57777 ']' 00:05:10.416 09:19:34 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.416 09:19:34 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.416 09:19:34 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.416 09:19:34 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.416 09:19:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.416 [2024-10-16 09:19:34.620472] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:10.416 [2024-10-16 09:19:34.620602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57777 ] 00:05:10.675 [2024-10-16 09:19:35.030157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.675 [2024-10-16 09:19:35.078391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.934 [2024-10-16 09:19:35.112972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.501 00:05:11.501 INFO: shutting down applications... 00:05:11.501 09:19:35 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.501 09:19:35 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.501 09:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
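Teardown, traced next, is the inverse: send SIGINT once and poll for at most 30 half-second intervals until the pid disappears. A rough equivalent of the json_config_test_shutdown_app loop, assuming only the behaviour visible in the trace (kill -0 as the liveness probe, 30 x 0.5 s bound):

    pid=${app_pid[target]}
    kill -SIGINT "$pid"                       # ask spdk_tgt to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then # kill -0 sends no signal, it only tests existence
            app_pid[target]=
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done

In this run a single 0.5 s wait is enough before the pid is gone and the loop breaks.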
00:05:11.501 09:19:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57777 ]] 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57777 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57777 00:05:11.501 09:19:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.069 09:19:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.069 09:19:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.069 09:19:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57777 00:05:12.069 09:19:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.069 09:19:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.069 09:19:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.069 09:19:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.069 SPDK target shutdown done 00:05:12.069 09:19:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.069 Success 00:05:12.069 00:05:12.069 real 0m1.791s 00:05:12.069 user 0m1.757s 00:05:12.069 sys 0m0.418s 00:05:12.069 09:19:36 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.069 09:19:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.069 ************************************ 00:05:12.069 END TEST json_config_extra_key 00:05:12.069 ************************************ 00:05:12.069 09:19:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.069 09:19:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.069 09:19:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.069 09:19:36 -- common/autotest_common.sh@10 -- # set +x 00:05:12.069 ************************************ 00:05:12.069 START TEST alias_rpc 00:05:12.069 ************************************ 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.069 * Looking for test storage... 
00:05:12.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.069 09:19:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.069 --rc genhtml_branch_coverage=1 00:05:12.069 --rc genhtml_function_coverage=1 00:05:12.069 --rc genhtml_legend=1 00:05:12.069 --rc geninfo_all_blocks=1 00:05:12.069 --rc geninfo_unexecuted_blocks=1 00:05:12.069 00:05:12.069 ' 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.069 --rc genhtml_branch_coverage=1 00:05:12.069 --rc genhtml_function_coverage=1 00:05:12.069 --rc genhtml_legend=1 00:05:12.069 --rc geninfo_all_blocks=1 00:05:12.069 --rc geninfo_unexecuted_blocks=1 00:05:12.069 00:05:12.069 ' 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.069 --rc genhtml_branch_coverage=1 00:05:12.069 --rc genhtml_function_coverage=1 00:05:12.069 --rc genhtml_legend=1 00:05:12.069 --rc geninfo_all_blocks=1 00:05:12.069 --rc geninfo_unexecuted_blocks=1 00:05:12.069 00:05:12.069 ' 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.069 --rc genhtml_branch_coverage=1 00:05:12.069 --rc genhtml_function_coverage=1 00:05:12.069 --rc genhtml_legend=1 00:05:12.069 --rc geninfo_all_blocks=1 00:05:12.069 --rc geninfo_unexecuted_blocks=1 00:05:12.069 00:05:12.069 ' 00:05:12.069 09:19:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.069 09:19:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57850 00:05:12.069 09:19:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57850 00:05:12.069 09:19:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57850 ']' 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.069 09:19:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.328 [2024-10-16 09:19:36.482818] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:12.328 [2024-10-16 09:19:36.483096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57850 ] 00:05:12.328 [2024-10-16 09:19:36.623742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.328 [2024-10-16 09:19:36.678961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.588 [2024-10-16 09:19:36.754312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.154 09:19:37 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.154 09:19:37 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:13.154 09:19:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:13.444 09:19:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57850 00:05:13.444 09:19:37 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57850 ']' 00:05:13.444 09:19:37 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57850 00:05:13.444 09:19:37 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.444 09:19:37 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.444 09:19:37 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57850 00:05:13.702 killing process with pid 57850 00:05:13.702 09:19:37 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.702 09:19:37 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.702 09:19:37 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57850' 00:05:13.702 09:19:37 alias_rpc -- common/autotest_common.sh@969 -- # kill 57850 00:05:13.702 09:19:37 alias_rpc -- common/autotest_common.sh@974 -- # wait 57850 00:05:13.961 ************************************ 00:05:13.961 END TEST alias_rpc 00:05:13.961 ************************************ 00:05:13.961 00:05:13.961 real 0m1.986s 00:05:13.961 user 0m2.279s 00:05:13.961 sys 0m0.475s 00:05:13.961 09:19:38 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.961 09:19:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.961 09:19:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:13.961 09:19:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.961 09:19:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.961 09:19:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.961 09:19:38 -- common/autotest_common.sh@10 -- # set +x 00:05:13.961 ************************************ 00:05:13.961 START TEST spdkcli_tcp 00:05:13.961 ************************************ 00:05:13.961 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.961 * Looking for test storage... 
00:05:13.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:13.961 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.961 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.961 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.220 09:19:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.220 --rc genhtml_branch_coverage=1 00:05:14.220 --rc genhtml_function_coverage=1 00:05:14.220 --rc genhtml_legend=1 00:05:14.220 --rc geninfo_all_blocks=1 00:05:14.220 --rc geninfo_unexecuted_blocks=1 00:05:14.220 00:05:14.220 ' 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.220 --rc genhtml_branch_coverage=1 00:05:14.220 --rc genhtml_function_coverage=1 00:05:14.220 --rc genhtml_legend=1 00:05:14.220 --rc geninfo_all_blocks=1 00:05:14.220 --rc geninfo_unexecuted_blocks=1 00:05:14.220 
00:05:14.220 ' 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.220 --rc genhtml_branch_coverage=1 00:05:14.220 --rc genhtml_function_coverage=1 00:05:14.220 --rc genhtml_legend=1 00:05:14.220 --rc geninfo_all_blocks=1 00:05:14.220 --rc geninfo_unexecuted_blocks=1 00:05:14.220 00:05:14.220 ' 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.220 --rc genhtml_branch_coverage=1 00:05:14.220 --rc genhtml_function_coverage=1 00:05:14.220 --rc genhtml_legend=1 00:05:14.220 --rc geninfo_all_blocks=1 00:05:14.220 --rc geninfo_unexecuted_blocks=1 00:05:14.220 00:05:14.220 ' 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57939 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57939 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57939 ']' 00:05:14.220 09:19:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.220 09:19:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.220 [2024-10-16 09:19:38.559239] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:14.220 [2024-10-16 09:19:38.559340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57939 ] 00:05:14.479 [2024-10-16 09:19:38.696880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.479 [2024-10-16 09:19:38.771864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.479 [2024-10-16 09:19:38.771877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.479 [2024-10-16 09:19:38.853152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.737 09:19:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.737 09:19:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:14.737 09:19:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57943 00:05:14.737 09:19:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:14.737 09:19:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:14.996 [ 00:05:14.996 "bdev_malloc_delete", 00:05:14.996 "bdev_malloc_create", 00:05:14.996 "bdev_null_resize", 00:05:14.996 "bdev_null_delete", 00:05:14.996 "bdev_null_create", 00:05:14.996 "bdev_nvme_cuse_unregister", 00:05:14.996 "bdev_nvme_cuse_register", 00:05:14.996 "bdev_opal_new_user", 00:05:14.996 "bdev_opal_set_lock_state", 00:05:14.996 "bdev_opal_delete", 00:05:14.996 "bdev_opal_get_info", 00:05:14.996 "bdev_opal_create", 00:05:14.996 "bdev_nvme_opal_revert", 00:05:14.996 "bdev_nvme_opal_init", 00:05:14.996 "bdev_nvme_send_cmd", 00:05:14.996 "bdev_nvme_set_keys", 00:05:14.996 "bdev_nvme_get_path_iostat", 00:05:14.996 "bdev_nvme_get_mdns_discovery_info", 00:05:14.996 "bdev_nvme_stop_mdns_discovery", 00:05:14.996 "bdev_nvme_start_mdns_discovery", 00:05:14.996 "bdev_nvme_set_multipath_policy", 00:05:14.996 "bdev_nvme_set_preferred_path", 00:05:14.996 "bdev_nvme_get_io_paths", 00:05:14.996 "bdev_nvme_remove_error_injection", 00:05:14.996 "bdev_nvme_add_error_injection", 00:05:14.996 "bdev_nvme_get_discovery_info", 00:05:14.996 "bdev_nvme_stop_discovery", 00:05:14.996 "bdev_nvme_start_discovery", 00:05:14.996 "bdev_nvme_get_controller_health_info", 00:05:14.996 "bdev_nvme_disable_controller", 00:05:14.996 "bdev_nvme_enable_controller", 00:05:14.996 "bdev_nvme_reset_controller", 00:05:14.996 "bdev_nvme_get_transport_statistics", 00:05:14.996 "bdev_nvme_apply_firmware", 00:05:14.996 "bdev_nvme_detach_controller", 00:05:14.996 "bdev_nvme_get_controllers", 00:05:14.996 "bdev_nvme_attach_controller", 00:05:14.996 "bdev_nvme_set_hotplug", 00:05:14.996 "bdev_nvme_set_options", 00:05:14.996 "bdev_passthru_delete", 00:05:14.996 "bdev_passthru_create", 00:05:14.996 "bdev_lvol_set_parent_bdev", 00:05:14.996 "bdev_lvol_set_parent", 00:05:14.996 "bdev_lvol_check_shallow_copy", 00:05:14.996 "bdev_lvol_start_shallow_copy", 00:05:14.996 "bdev_lvol_grow_lvstore", 00:05:14.996 "bdev_lvol_get_lvols", 00:05:14.996 "bdev_lvol_get_lvstores", 00:05:14.996 "bdev_lvol_delete", 00:05:14.996 "bdev_lvol_set_read_only", 00:05:14.996 "bdev_lvol_resize", 00:05:14.996 "bdev_lvol_decouple_parent", 00:05:14.996 "bdev_lvol_inflate", 00:05:14.996 "bdev_lvol_rename", 00:05:14.996 "bdev_lvol_clone_bdev", 00:05:14.996 "bdev_lvol_clone", 00:05:14.996 "bdev_lvol_snapshot", 
00:05:14.996 "bdev_lvol_create", 00:05:14.996 "bdev_lvol_delete_lvstore", 00:05:14.996 "bdev_lvol_rename_lvstore", 00:05:14.996 "bdev_lvol_create_lvstore", 00:05:14.996 "bdev_raid_set_options", 00:05:14.996 "bdev_raid_remove_base_bdev", 00:05:14.996 "bdev_raid_add_base_bdev", 00:05:14.996 "bdev_raid_delete", 00:05:14.996 "bdev_raid_create", 00:05:14.996 "bdev_raid_get_bdevs", 00:05:14.996 "bdev_error_inject_error", 00:05:14.996 "bdev_error_delete", 00:05:14.996 "bdev_error_create", 00:05:14.996 "bdev_split_delete", 00:05:14.996 "bdev_split_create", 00:05:14.996 "bdev_delay_delete", 00:05:14.996 "bdev_delay_create", 00:05:14.996 "bdev_delay_update_latency", 00:05:14.996 "bdev_zone_block_delete", 00:05:14.996 "bdev_zone_block_create", 00:05:14.996 "blobfs_create", 00:05:14.997 "blobfs_detect", 00:05:14.997 "blobfs_set_cache_size", 00:05:14.997 "bdev_aio_delete", 00:05:14.997 "bdev_aio_rescan", 00:05:14.997 "bdev_aio_create", 00:05:14.997 "bdev_ftl_set_property", 00:05:14.997 "bdev_ftl_get_properties", 00:05:14.997 "bdev_ftl_get_stats", 00:05:14.997 "bdev_ftl_unmap", 00:05:14.997 "bdev_ftl_unload", 00:05:14.997 "bdev_ftl_delete", 00:05:14.997 "bdev_ftl_load", 00:05:14.997 "bdev_ftl_create", 00:05:14.997 "bdev_virtio_attach_controller", 00:05:14.997 "bdev_virtio_scsi_get_devices", 00:05:14.997 "bdev_virtio_detach_controller", 00:05:14.997 "bdev_virtio_blk_set_hotplug", 00:05:14.997 "bdev_iscsi_delete", 00:05:14.997 "bdev_iscsi_create", 00:05:14.997 "bdev_iscsi_set_options", 00:05:14.997 "bdev_uring_delete", 00:05:14.997 "bdev_uring_rescan", 00:05:14.997 "bdev_uring_create", 00:05:14.997 "accel_error_inject_error", 00:05:14.997 "ioat_scan_accel_module", 00:05:14.997 "dsa_scan_accel_module", 00:05:14.997 "iaa_scan_accel_module", 00:05:14.997 "keyring_file_remove_key", 00:05:14.997 "keyring_file_add_key", 00:05:14.997 "keyring_linux_set_options", 00:05:14.997 "fsdev_aio_delete", 00:05:14.997 "fsdev_aio_create", 00:05:14.997 "iscsi_get_histogram", 00:05:14.997 "iscsi_enable_histogram", 00:05:14.997 "iscsi_set_options", 00:05:14.997 "iscsi_get_auth_groups", 00:05:14.997 "iscsi_auth_group_remove_secret", 00:05:14.997 "iscsi_auth_group_add_secret", 00:05:14.997 "iscsi_delete_auth_group", 00:05:14.997 "iscsi_create_auth_group", 00:05:14.997 "iscsi_set_discovery_auth", 00:05:14.997 "iscsi_get_options", 00:05:14.997 "iscsi_target_node_request_logout", 00:05:14.997 "iscsi_target_node_set_redirect", 00:05:14.997 "iscsi_target_node_set_auth", 00:05:14.997 "iscsi_target_node_add_lun", 00:05:14.997 "iscsi_get_stats", 00:05:14.997 "iscsi_get_connections", 00:05:14.997 "iscsi_portal_group_set_auth", 00:05:14.997 "iscsi_start_portal_group", 00:05:14.997 "iscsi_delete_portal_group", 00:05:14.997 "iscsi_create_portal_group", 00:05:14.997 "iscsi_get_portal_groups", 00:05:14.997 "iscsi_delete_target_node", 00:05:14.997 "iscsi_target_node_remove_pg_ig_maps", 00:05:14.997 "iscsi_target_node_add_pg_ig_maps", 00:05:14.997 "iscsi_create_target_node", 00:05:14.997 "iscsi_get_target_nodes", 00:05:14.997 "iscsi_delete_initiator_group", 00:05:14.997 "iscsi_initiator_group_remove_initiators", 00:05:14.997 "iscsi_initiator_group_add_initiators", 00:05:14.997 "iscsi_create_initiator_group", 00:05:14.997 "iscsi_get_initiator_groups", 00:05:14.997 "nvmf_set_crdt", 00:05:14.997 "nvmf_set_config", 00:05:14.997 "nvmf_set_max_subsystems", 00:05:14.997 "nvmf_stop_mdns_prr", 00:05:14.997 "nvmf_publish_mdns_prr", 00:05:14.997 "nvmf_subsystem_get_listeners", 00:05:14.997 "nvmf_subsystem_get_qpairs", 00:05:14.997 
"nvmf_subsystem_get_controllers", 00:05:14.997 "nvmf_get_stats", 00:05:14.997 "nvmf_get_transports", 00:05:14.997 "nvmf_create_transport", 00:05:14.997 "nvmf_get_targets", 00:05:14.997 "nvmf_delete_target", 00:05:14.997 "nvmf_create_target", 00:05:14.997 "nvmf_subsystem_allow_any_host", 00:05:14.997 "nvmf_subsystem_set_keys", 00:05:14.997 "nvmf_subsystem_remove_host", 00:05:14.997 "nvmf_subsystem_add_host", 00:05:14.997 "nvmf_ns_remove_host", 00:05:14.997 "nvmf_ns_add_host", 00:05:14.997 "nvmf_subsystem_remove_ns", 00:05:14.997 "nvmf_subsystem_set_ns_ana_group", 00:05:14.997 "nvmf_subsystem_add_ns", 00:05:14.997 "nvmf_subsystem_listener_set_ana_state", 00:05:14.997 "nvmf_discovery_get_referrals", 00:05:14.997 "nvmf_discovery_remove_referral", 00:05:14.997 "nvmf_discovery_add_referral", 00:05:14.997 "nvmf_subsystem_remove_listener", 00:05:14.997 "nvmf_subsystem_add_listener", 00:05:14.997 "nvmf_delete_subsystem", 00:05:14.997 "nvmf_create_subsystem", 00:05:14.997 "nvmf_get_subsystems", 00:05:14.997 "env_dpdk_get_mem_stats", 00:05:14.997 "nbd_get_disks", 00:05:14.997 "nbd_stop_disk", 00:05:14.997 "nbd_start_disk", 00:05:14.997 "ublk_recover_disk", 00:05:14.997 "ublk_get_disks", 00:05:14.997 "ublk_stop_disk", 00:05:14.997 "ublk_start_disk", 00:05:14.997 "ublk_destroy_target", 00:05:14.997 "ublk_create_target", 00:05:14.997 "virtio_blk_create_transport", 00:05:14.997 "virtio_blk_get_transports", 00:05:14.997 "vhost_controller_set_coalescing", 00:05:14.997 "vhost_get_controllers", 00:05:14.997 "vhost_delete_controller", 00:05:14.997 "vhost_create_blk_controller", 00:05:14.997 "vhost_scsi_controller_remove_target", 00:05:14.997 "vhost_scsi_controller_add_target", 00:05:14.997 "vhost_start_scsi_controller", 00:05:14.997 "vhost_create_scsi_controller", 00:05:14.997 "thread_set_cpumask", 00:05:14.997 "scheduler_set_options", 00:05:14.997 "framework_get_governor", 00:05:14.997 "framework_get_scheduler", 00:05:14.997 "framework_set_scheduler", 00:05:14.997 "framework_get_reactors", 00:05:14.997 "thread_get_io_channels", 00:05:14.997 "thread_get_pollers", 00:05:14.997 "thread_get_stats", 00:05:14.997 "framework_monitor_context_switch", 00:05:14.997 "spdk_kill_instance", 00:05:14.997 "log_enable_timestamps", 00:05:14.997 "log_get_flags", 00:05:14.997 "log_clear_flag", 00:05:14.997 "log_set_flag", 00:05:14.997 "log_get_level", 00:05:14.997 "log_set_level", 00:05:14.997 "log_get_print_level", 00:05:14.997 "log_set_print_level", 00:05:14.997 "framework_enable_cpumask_locks", 00:05:14.997 "framework_disable_cpumask_locks", 00:05:14.997 "framework_wait_init", 00:05:14.997 "framework_start_init", 00:05:14.997 "scsi_get_devices", 00:05:14.997 "bdev_get_histogram", 00:05:14.997 "bdev_enable_histogram", 00:05:14.997 "bdev_set_qos_limit", 00:05:14.997 "bdev_set_qd_sampling_period", 00:05:14.997 "bdev_get_bdevs", 00:05:14.997 "bdev_reset_iostat", 00:05:14.997 "bdev_get_iostat", 00:05:14.997 "bdev_examine", 00:05:14.997 "bdev_wait_for_examine", 00:05:14.997 "bdev_set_options", 00:05:14.997 "accel_get_stats", 00:05:14.997 "accel_set_options", 00:05:14.997 "accel_set_driver", 00:05:14.997 "accel_crypto_key_destroy", 00:05:14.997 "accel_crypto_keys_get", 00:05:14.997 "accel_crypto_key_create", 00:05:14.997 "accel_assign_opc", 00:05:14.997 "accel_get_module_info", 00:05:14.997 "accel_get_opc_assignments", 00:05:14.997 "vmd_rescan", 00:05:14.997 "vmd_remove_device", 00:05:14.997 "vmd_enable", 00:05:14.997 "sock_get_default_impl", 00:05:14.997 "sock_set_default_impl", 00:05:14.997 "sock_impl_set_options", 00:05:14.997 
"sock_impl_get_options", 00:05:14.997 "iobuf_get_stats", 00:05:14.997 "iobuf_set_options", 00:05:14.997 "keyring_get_keys", 00:05:14.997 "framework_get_pci_devices", 00:05:14.997 "framework_get_config", 00:05:14.997 "framework_get_subsystems", 00:05:14.997 "fsdev_set_opts", 00:05:14.997 "fsdev_get_opts", 00:05:14.997 "trace_get_info", 00:05:14.997 "trace_get_tpoint_group_mask", 00:05:14.997 "trace_disable_tpoint_group", 00:05:14.997 "trace_enable_tpoint_group", 00:05:14.997 "trace_clear_tpoint_mask", 00:05:14.997 "trace_set_tpoint_mask", 00:05:14.997 "notify_get_notifications", 00:05:14.997 "notify_get_types", 00:05:14.997 "spdk_get_version", 00:05:14.997 "rpc_get_methods" 00:05:14.997 ] 00:05:14.997 09:19:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:14.997 09:19:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.997 09:19:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.997 09:19:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:14.997 09:19:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57939 00:05:14.997 09:19:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57939 ']' 00:05:14.997 09:19:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57939 00:05:14.997 09:19:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:15.256 09:19:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.256 09:19:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57939 00:05:15.256 killing process with pid 57939 00:05:15.256 09:19:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.256 09:19:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.256 09:19:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57939' 00:05:15.256 09:19:39 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57939 00:05:15.256 09:19:39 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57939 00:05:15.514 ************************************ 00:05:15.514 END TEST spdkcli_tcp 00:05:15.514 ************************************ 00:05:15.514 00:05:15.514 real 0m1.540s 00:05:15.514 user 0m2.610s 00:05:15.514 sys 0m0.461s 00:05:15.514 09:19:39 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.514 09:19:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.514 09:19:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.514 09:19:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.514 09:19:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.514 09:19:39 -- common/autotest_common.sh@10 -- # set +x 00:05:15.514 ************************************ 00:05:15.514 START TEST dpdk_mem_utility 00:05:15.514 ************************************ 00:05:15.514 09:19:39 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.774 * Looking for test storage... 
00:05:15.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:15.774 09:19:39 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:15.774 09:19:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:15.774 09:19:39 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:15.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.774 09:19:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:15.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.774 --rc genhtml_branch_coverage=1 00:05:15.774 --rc genhtml_function_coverage=1 00:05:15.774 --rc genhtml_legend=1 00:05:15.774 --rc geninfo_all_blocks=1 00:05:15.774 --rc geninfo_unexecuted_blocks=1 00:05:15.774 00:05:15.774 ' 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:15.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.774 --rc genhtml_branch_coverage=1 00:05:15.774 --rc genhtml_function_coverage=1 00:05:15.774 --rc genhtml_legend=1 00:05:15.774 --rc geninfo_all_blocks=1 00:05:15.774 --rc geninfo_unexecuted_blocks=1 00:05:15.774 00:05:15.774 ' 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:15.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.774 --rc genhtml_branch_coverage=1 00:05:15.774 --rc genhtml_function_coverage=1 00:05:15.774 --rc genhtml_legend=1 00:05:15.774 --rc geninfo_all_blocks=1 00:05:15.774 --rc geninfo_unexecuted_blocks=1 00:05:15.774 00:05:15.774 ' 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:15.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.774 --rc genhtml_branch_coverage=1 00:05:15.774 --rc genhtml_function_coverage=1 00:05:15.774 --rc genhtml_legend=1 00:05:15.774 --rc geninfo_all_blocks=1 00:05:15.774 --rc geninfo_unexecuted_blocks=1 00:05:15.774 00:05:15.774 ' 00:05:15.774 09:19:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:15.774 09:19:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58025 00:05:15.774 09:19:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58025 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58025 ']' 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.774 09:19:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.774 09:19:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.774 [2024-10-16 09:19:40.099861] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:15.774 [2024-10-16 09:19:40.099968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58025 ] 00:05:16.033 [2024-10-16 09:19:40.237605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.033 [2024-10-16 09:19:40.308177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.033 [2024-10-16 09:19:40.389234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.971 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.971 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:16.971 09:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.971 09:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.972 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.972 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.972 { 00:05:16.972 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.972 } 00:05:16.972 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.972 09:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.972 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:16.972 1 heaps totaling size 810.000000 MiB 00:05:16.972 size: 810.000000 MiB heap id: 0 00:05:16.972 end heaps---------- 00:05:16.972 9 mempools totaling size 595.772034 MiB 00:05:16.972 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.972 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.972 size: 92.545471 MiB name: bdev_io_58025 00:05:16.972 size: 50.003479 MiB name: msgpool_58025 00:05:16.972 size: 36.509338 MiB name: fsdev_io_58025 00:05:16.972 size: 21.763794 MiB name: PDU_Pool 00:05:16.972 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.972 size: 4.133484 MiB name: evtpool_58025 00:05:16.972 size: 0.026123 MiB name: Session_Pool 00:05:16.972 end mempools------- 00:05:16.972 6 memzones totaling size 4.142822 MiB 00:05:16.972 size: 1.000366 MiB name: RG_ring_0_58025 00:05:16.972 size: 1.000366 MiB name: RG_ring_1_58025 00:05:16.972 size: 1.000366 MiB name: RG_ring_4_58025 00:05:16.972 size: 1.000366 MiB name: RG_ring_5_58025 00:05:16.972 size: 0.125366 MiB name: RG_ring_2_58025 00:05:16.972 size: 0.015991 MiB name: RG_ring_3_58025 00:05:16.972 end memzones------- 00:05:16.972 09:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.972 heap id: 0 total size: 810.000000 MiB number of busy elements: 314 number of free elements: 15 00:05:16.972 list of free elements. 
size: 10.813049 MiB 00:05:16.972 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:16.972 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:16.972 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:16.972 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:16.972 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:16.972 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:16.972 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:16.972 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:16.972 element at address: 0x20001a600000 with size: 0.567505 MiB 00:05:16.972 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:16.972 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:16.972 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:16.972 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:16.972 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:16.972 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:16.972 list of standard malloc elements. size: 199.268066 MiB 00:05:16.972 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:16.972 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:16.972 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:16.972 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:16.972 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:16.972 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:16.972 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:16.972 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:16.972 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:16.972 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:16.972 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:16.972 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:16.972 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:16.972 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:16.973 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691480 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691540 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a692ec0 with size: 0.000183 MiB 
00:05:16.973 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:16.973 element at 
address: 0x20001a695440 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:16.973 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e580 
with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:16.974 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:16.974 list of memzone associated elements. 
size: 599.918884 MiB 00:05:16.974 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:16.974 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.974 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:16.974 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.974 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:16.974 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58025_0 00:05:16.974 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:16.974 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58025_0 00:05:16.974 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:16.974 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58025_0 00:05:16.974 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:16.974 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.974 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:16.974 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.974 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:16.974 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58025_0 00:05:16.974 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:16.974 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58025 00:05:16.974 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:16.974 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58025 00:05:16.974 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:16.974 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.974 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:16.974 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.974 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:16.974 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.974 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:16.974 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.974 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58025 00:05:16.974 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58025 00:05:16.974 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58025 00:05:16.974 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58025 00:05:16.974 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:16.974 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58025 00:05:16.974 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:16.974 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58025 00:05:16.974 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:16.974 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.974 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:16.974 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.974 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:16.974 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.974 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:16.974 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58025 00:05:16.974 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:16.974 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58025 00:05:16.974 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:16.974 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.974 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:16.974 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.974 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:16.974 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58025 00:05:16.974 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:16.974 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.974 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:16.974 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58025 00:05:16.974 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:16.974 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58025 00:05:16.974 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:16.974 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58025 00:05:16.974 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:16.974 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.974 09:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.974 09:19:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58025 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58025 ']' 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58025 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58025 00:05:16.974 killing process with pid 58025 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58025' 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58025 00:05:16.974 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58025 00:05:17.546 00:05:17.546 real 0m1.880s 00:05:17.546 user 0m2.103s 00:05:17.546 sys 0m0.463s 00:05:17.546 ************************************ 00:05:17.546 END TEST dpdk_mem_utility 00:05:17.546 ************************************ 00:05:17.546 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.546 09:19:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.546 09:19:41 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.546 09:19:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.546 09:19:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.546 09:19:41 -- common/autotest_common.sh@10 -- # set +x 
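The dpdk_mem_utility block that just finished follows a simple flow: spdk_tgt is started, the env_dpdk_get_mem_stats RPC asks the target to write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then parses that dump, first into the heap/mempool/memzone summary and then, with -m 0, into the element-by-element listing for heap 0 reproduced above. A minimal sketch of doing the same by hand, assuming a freshly started target, the default RPC socket at /var/tmp/spdk.sock, and the repo layout used in this job:

  ./build/bin/spdk_tgt &                     # start the target and wait for the RPC socket to appear
  ./scripts/rpc.py env_dpdk_get_mem_stats    # asks the target to write /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                 # summary view: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0            # per-element detail for heap 0, as dumped above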
00:05:17.546 ************************************ 00:05:17.546 START TEST event 00:05:17.546 ************************************ 00:05:17.546 09:19:41 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:17.546 * Looking for test storage... 00:05:17.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:17.546 09:19:41 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.546 09:19:41 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.546 09:19:41 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.805 09:19:41 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.805 09:19:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.805 09:19:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.805 09:19:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.805 09:19:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.805 09:19:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.805 09:19:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.805 09:19:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.805 09:19:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.805 09:19:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.805 09:19:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.805 09:19:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.805 09:19:41 event -- scripts/common.sh@344 -- # case "$op" in 00:05:17.806 09:19:41 event -- scripts/common.sh@345 -- # : 1 00:05:17.806 09:19:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.806 09:19:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.806 09:19:41 event -- scripts/common.sh@365 -- # decimal 1 00:05:17.806 09:19:41 event -- scripts/common.sh@353 -- # local d=1 00:05:17.806 09:19:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.806 09:19:41 event -- scripts/common.sh@355 -- # echo 1 00:05:17.806 09:19:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.806 09:19:41 event -- scripts/common.sh@366 -- # decimal 2 00:05:17.806 09:19:41 event -- scripts/common.sh@353 -- # local d=2 00:05:17.806 09:19:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.806 09:19:41 event -- scripts/common.sh@355 -- # echo 2 00:05:17.806 09:19:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.806 09:19:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.806 09:19:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.806 09:19:41 event -- scripts/common.sh@368 -- # return 0 00:05:17.806 09:19:41 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.806 09:19:41 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.806 --rc genhtml_branch_coverage=1 00:05:17.806 --rc genhtml_function_coverage=1 00:05:17.806 --rc genhtml_legend=1 00:05:17.806 --rc geninfo_all_blocks=1 00:05:17.806 --rc geninfo_unexecuted_blocks=1 00:05:17.806 00:05:17.806 ' 00:05:17.806 09:19:41 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.806 --rc genhtml_branch_coverage=1 00:05:17.806 --rc genhtml_function_coverage=1 00:05:17.806 --rc genhtml_legend=1 00:05:17.806 --rc 
geninfo_all_blocks=1 00:05:17.806 --rc geninfo_unexecuted_blocks=1 00:05:17.806 00:05:17.806 ' 00:05:17.806 09:19:41 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.806 --rc genhtml_branch_coverage=1 00:05:17.806 --rc genhtml_function_coverage=1 00:05:17.806 --rc genhtml_legend=1 00:05:17.806 --rc geninfo_all_blocks=1 00:05:17.806 --rc geninfo_unexecuted_blocks=1 00:05:17.806 00:05:17.806 ' 00:05:17.806 09:19:41 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.806 --rc genhtml_branch_coverage=1 00:05:17.806 --rc genhtml_function_coverage=1 00:05:17.806 --rc genhtml_legend=1 00:05:17.806 --rc geninfo_all_blocks=1 00:05:17.806 --rc geninfo_unexecuted_blocks=1 00:05:17.806 00:05:17.806 ' 00:05:17.806 09:19:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:17.806 09:19:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.806 09:19:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.806 09:19:41 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:17.806 09:19:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.806 09:19:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.806 ************************************ 00:05:17.806 START TEST event_perf 00:05:17.806 ************************************ 00:05:17.806 09:19:41 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.806 Running I/O for 1 seconds...[2024-10-16 09:19:42.007490] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:17.806 [2024-10-16 09:19:42.007587] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:05:17.806 [2024-10-16 09:19:42.145478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.806 [2024-10-16 09:19:42.203864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.806 [2024-10-16 09:19:42.203979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.806 [2024-10-16 09:19:42.204111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.806 [2024-10-16 09:19:42.204117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.181 Running I/O for 1 seconds... 00:05:19.181 lcore 0: 189546 00:05:19.181 lcore 1: 189543 00:05:19.181 lcore 2: 189543 00:05:19.181 lcore 3: 189544 00:05:19.181 done. 
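The event_perf run above measures raw event throughput of the SPDK event framework: with core mask 0xF four reactors come up, the tool queues events for one second (-t 1), and once it prints "done." each lcore line reports how many events that reactor processed, roughly 189k per core here. A sketch of invoking the same binary directly from the repo root, using exactly the arguments traced above:

  # -m 0xF starts reactors on cores 0-3, -t 1 runs the measurement for one second
  ./test/event/event_perf/event_perf -m 0xF -t 1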
00:05:19.181 00:05:19.181 real 0m1.262s 00:05:19.181 user 0m4.093s 00:05:19.181 sys 0m0.048s 00:05:19.181 09:19:43 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.181 09:19:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.181 ************************************ 00:05:19.181 END TEST event_perf 00:05:19.181 ************************************ 00:05:19.181 09:19:43 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:19.181 09:19:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:19.181 09:19:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.181 09:19:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.181 ************************************ 00:05:19.181 START TEST event_reactor 00:05:19.181 ************************************ 00:05:19.181 09:19:43 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:19.181 [2024-10-16 09:19:43.318743] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:19.181 [2024-10-16 09:19:43.318827] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58143 ] 00:05:19.181 [2024-10-16 09:19:43.455595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.181 [2024-10-16 09:19:43.499699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.558 test_start 00:05:20.558 oneshot 00:05:20.558 tick 100 00:05:20.558 tick 100 00:05:20.558 tick 250 00:05:20.558 tick 100 00:05:20.558 tick 100 00:05:20.558 tick 100 00:05:20.558 tick 250 00:05:20.558 tick 500 00:05:20.558 tick 100 00:05:20.558 tick 100 00:05:20.558 tick 250 00:05:20.558 tick 100 00:05:20.558 tick 100 00:05:20.558 test_end 00:05:20.558 00:05:20.558 real 0m1.241s 00:05:20.558 user 0m1.093s 00:05:20.558 sys 0m0.043s 00:05:20.558 ************************************ 00:05:20.558 END TEST event_reactor 00:05:20.558 ************************************ 00:05:20.558 09:19:44 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.558 09:19:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:20.558 09:19:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.558 09:19:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:20.558 09:19:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.558 09:19:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.558 ************************************ 00:05:20.558 START TEST event_reactor_perf 00:05:20.558 ************************************ 00:05:20.558 09:19:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.558 [2024-10-16 09:19:44.612484] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:20.558 [2024-10-16 09:19:44.612704] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:05:20.558 [2024-10-16 09:19:44.745375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.558 [2024-10-16 09:19:44.803137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.494 test_start 00:05:21.494 test_end 00:05:21.494 Performance: 366055 events per second 00:05:21.494 00:05:21.494 real 0m1.258s 00:05:21.494 user 0m1.106s 00:05:21.494 sys 0m0.046s 00:05:21.494 ************************************ 00:05:21.494 END TEST event_reactor_perf 00:05:21.494 ************************************ 00:05:21.494 09:19:45 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.494 09:19:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.494 09:19:45 event -- event/event.sh@49 -- # uname -s 00:05:21.753 09:19:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:21.753 09:19:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.753 09:19:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.753 09:19:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.753 09:19:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.753 ************************************ 00:05:21.753 START TEST event_scheduler 00:05:21.753 ************************************ 00:05:21.753 09:19:45 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:21.753 * Looking for test storage... 
00:05:21.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:21.753 09:19:45 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:21.753 09:19:46 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:21.753 09:19:46 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:21.753 09:19:46 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:21.753 09:19:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.753 09:19:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.753 09:19:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.753 09:19:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.753 09:19:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.753 09:19:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.753 09:19:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.754 09:19:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.754 --rc genhtml_branch_coverage=1 00:05:21.754 --rc genhtml_function_coverage=1 00:05:21.754 --rc genhtml_legend=1 00:05:21.754 --rc geninfo_all_blocks=1 00:05:21.754 --rc geninfo_unexecuted_blocks=1 00:05:21.754 00:05:21.754 ' 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.754 --rc genhtml_branch_coverage=1 00:05:21.754 --rc genhtml_function_coverage=1 00:05:21.754 --rc genhtml_legend=1 00:05:21.754 --rc geninfo_all_blocks=1 00:05:21.754 --rc geninfo_unexecuted_blocks=1 00:05:21.754 00:05:21.754 ' 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.754 --rc genhtml_branch_coverage=1 00:05:21.754 --rc genhtml_function_coverage=1 00:05:21.754 --rc genhtml_legend=1 00:05:21.754 --rc geninfo_all_blocks=1 00:05:21.754 --rc geninfo_unexecuted_blocks=1 00:05:21.754 00:05:21.754 ' 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.754 --rc genhtml_branch_coverage=1 00:05:21.754 --rc genhtml_function_coverage=1 00:05:21.754 --rc genhtml_legend=1 00:05:21.754 --rc geninfo_all_blocks=1 00:05:21.754 --rc geninfo_unexecuted_blocks=1 00:05:21.754 00:05:21.754 ' 00:05:21.754 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:21.754 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58248 00:05:21.754 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:21.754 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.754 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58248 00:05:21.754 09:19:46 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58248 ']' 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.754 09:19:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.013 [2024-10-16 09:19:46.169624] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:22.013 [2024-10-16 09:19:46.169923] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58248 ] 00:05:22.013 [2024-10-16 09:19:46.309043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.013 [2024-10-16 09:19:46.380305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.013 [2024-10-16 09:19:46.380459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.013 [2024-10-16 09:19:46.380571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.013 [2024-10-16 09:19:46.380572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.271 09:19:46 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:22.272 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.272 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.272 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.272 POWER: Cannot set governor of lcore 0 to performance 00:05:22.272 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.272 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.272 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:22.272 POWER: Cannot set governor of lcore 0 to userspace 00:05:22.272 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:22.272 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:22.272 POWER: Unable to set Power Management Environment for lcore 0 00:05:22.272 [2024-10-16 09:19:46.447531] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:22.272 [2024-10-16 09:19:46.447670] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:22.272 [2024-10-16 09:19:46.447793] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:22.272 [2024-10-16 09:19:46.447908] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:22.272 [2024-10-16 09:19:46.447956] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:22.272 [2024-10-16 09:19:46.448066] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 [2024-10-16 09:19:46.515460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.272 [2024-10-16 09:19:46.553147] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 ************************************ 00:05:22.272 START TEST scheduler_create_thread 00:05:22.272 ************************************ 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 2 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 3 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 4 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 5 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 6 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 7 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 8 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 9 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 10 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.272 09:19:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.840 09:19:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.840 09:19:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.840 09:19:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.840 09:19:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.215 09:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.215 09:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.215 09:19:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.215 09:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.215 09:19:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.619 ************************************ 00:05:25.619 END TEST scheduler_create_thread 00:05:25.619 ************************************ 00:05:25.619 09:19:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.619 00:05:25.619 real 0m3.095s 00:05:25.619 user 0m0.014s 00:05:25.619 sys 0m0.008s 00:05:25.619 09:19:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.619 09:19:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.619 09:19:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.619 09:19:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58248 00:05:25.619 09:19:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58248 ']' 00:05:25.619 09:19:49 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58248 00:05:25.619 09:19:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:25.619 09:19:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.619 09:19:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58248 00:05:25.619 killing process with pid 58248 00:05:25.619 09:19:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:25.619 09:19:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:25.620 09:19:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58248' 00:05:25.620 09:19:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58248 00:05:25.620 09:19:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58248 00:05:25.878 [2024-10-16 09:19:50.041810] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:25.878 00:05:25.878 real 0m4.354s 00:05:25.878 user 0m6.902s 00:05:25.878 sys 0m0.373s 00:05:25.878 09:19:50 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.878 ************************************ 00:05:25.878 END TEST event_scheduler 00:05:25.878 ************************************ 00:05:25.878 09:19:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.138 09:19:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.138 09:19:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.138 09:19:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.138 09:19:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.138 09:19:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.138 ************************************ 00:05:26.138 START TEST app_repeat 00:05:26.138 ************************************ 00:05:26.138 09:19:50 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58340 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.138 Process app_repeat pid: 58340 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58340' 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.138 spdk_app_start Round 0 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.138 09:19:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58340 /var/tmp/spdk-nbd.sock 00:05:26.138 09:19:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58340 ']' 00:05:26.138 09:19:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.138 09:19:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.138 09:19:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
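Note: the app_repeat rounds that start here drive the spdk-nbd target purely over its RPC socket. As a rough standalone sketch of the setup the following trace performs (assuming the same rpc.py path, socket, and sizes that appear in this log; not a verbatim excerpt of the test script):

  # create two 64 MB malloc bdevs with a 4096-byte block size (-> Malloc0, Malloc1)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  # export each bdev as a Linux NBD block device
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1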
00:05:26.138 09:19:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.138 09:19:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.138 [2024-10-16 09:19:50.352390] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:26.138 [2024-10-16 09:19:50.352483] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58340 ] 00:05:26.138 [2024-10-16 09:19:50.489540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.138 [2024-10-16 09:19:50.537608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.138 [2024-10-16 09:19:50.537632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.397 [2024-10-16 09:19:50.594173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.397 09:19:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.397 09:19:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:26.397 09:19:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.681 Malloc0 00:05:26.681 09:19:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.940 Malloc1 00:05:26.940 09:19:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.940 09:19:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.199 /dev/nbd0 00:05:27.199 09:19:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.199 09:19:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.199 1+0 records in 00:05:27.199 1+0 records out 00:05:27.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320875 s, 12.8 MB/s 00:05:27.199 09:19:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.457 09:19:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.457 09:19:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.457 09:19:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.457 09:19:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.457 09:19:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.457 09:19:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.457 09:19:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.457 /dev/nbd1 00:05:27.716 09:19:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.716 09:19:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.716 09:19:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:27.716 09:19:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.716 09:19:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.717 1+0 records in 00:05:27.717 1+0 records out 00:05:27.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216911 s, 18.9 MB/s 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.717 09:19:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.717 09:19:51 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:05:27.717 09:19:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.717 09:19:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.717 09:19:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.717 09:19:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.717 09:19:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.976 { 00:05:27.976 "nbd_device": "/dev/nbd0", 00:05:27.976 "bdev_name": "Malloc0" 00:05:27.976 }, 00:05:27.976 { 00:05:27.976 "nbd_device": "/dev/nbd1", 00:05:27.976 "bdev_name": "Malloc1" 00:05:27.976 } 00:05:27.976 ]' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.976 { 00:05:27.976 "nbd_device": "/dev/nbd0", 00:05:27.976 "bdev_name": "Malloc0" 00:05:27.976 }, 00:05:27.976 { 00:05:27.976 "nbd_device": "/dev/nbd1", 00:05:27.976 "bdev_name": "Malloc1" 00:05:27.976 } 00:05:27.976 ]' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.976 /dev/nbd1' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.976 /dev/nbd1' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.976 256+0 records in 00:05:27.976 256+0 records out 00:05:27.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00759454 s, 138 MB/s 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.976 256+0 records in 00:05:27.976 256+0 records out 00:05:27.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026027 s, 40.3 MB/s 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.976 256+0 records in 00:05:27.976 
256+0 records out 00:05:27.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230129 s, 45.6 MB/s 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.976 09:19:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.544 09:19:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
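The nbd_dd_data_verify steps traced above reduce to a write-then-compare pattern. A minimal sketch using the same file sizes and flags that appear in this log (file name shortened here for readability):

  # generate 1 MiB of random data (256 x 4096-byte blocks)
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  # write it to each exported NBD device with O_DIRECT
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  # read back and byte-compare the first 1M of each device against the source file
  cmp -b -n 1M nbdrandtest /dev/nbd0
  cmp -b -n 1M nbdrandtest /dev/nbd1
  rm nbdrandtest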
00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.803 09:19:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.062 09:19:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.062 09:19:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.321 09:19:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.580 [2024-10-16 09:19:53.779827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.580 [2024-10-16 09:19:53.818623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.580 [2024-10-16 09:19:53.818635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.580 [2024-10-16 09:19:53.874098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.580 [2024-10-16 09:19:53.874215] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.580 [2024-10-16 09:19:53.874228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.866 09:19:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.866 spdk_app_start Round 1 00:05:32.866 09:19:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.866 09:19:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58340 /var/tmp/spdk-nbd.sock 00:05:32.866 09:19:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58340 ']' 00:05:32.867 09:19:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.867 09:19:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.867 09:19:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
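For reference, the nbd_get_count checks interleaved through these rounds amount to listing the exported devices over RPC and counting them. A condensed sketch built only from the calls visible in the trace:

  disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # extract the /dev/nbdX paths; an empty JSON list yields an empty string
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  # grep -c exits non-zero when nothing matches, so the count falls back to 0
  count=$(echo "$names" | grep -c /dev/nbd || true)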
00:05:32.867 09:19:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.867 09:19:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.867 09:19:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.867 09:19:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.867 09:19:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.867 Malloc0 00:05:32.867 09:19:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.125 Malloc1 00:05:33.125 09:19:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.125 09:19:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.382 /dev/nbd0 00:05:33.382 09:19:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.382 09:19:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.382 09:19:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:33.382 09:19:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.382 09:19:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.382 09:19:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.382 09:19:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:33.382 09:19:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.383 1+0 records in 00:05:33.383 1+0 records out 
00:05:33.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280392 s, 14.6 MB/s 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.383 09:19:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.383 09:19:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.383 09:19:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.383 09:19:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.642 /dev/nbd1 00:05:33.901 09:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.901 09:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.901 1+0 records in 00:05:33.901 1+0 records out 00:05:33.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321451 s, 12.7 MB/s 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.901 09:19:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.901 09:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.901 09:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.901 09:19:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.901 09:19:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.901 09:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.160 { 00:05:34.160 "nbd_device": "/dev/nbd0", 00:05:34.160 "bdev_name": "Malloc0" 00:05:34.160 }, 00:05:34.160 { 00:05:34.160 "nbd_device": "/dev/nbd1", 00:05:34.160 "bdev_name": "Malloc1" 00:05:34.160 } 
00:05:34.160 ]' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.160 { 00:05:34.160 "nbd_device": "/dev/nbd0", 00:05:34.160 "bdev_name": "Malloc0" 00:05:34.160 }, 00:05:34.160 { 00:05:34.160 "nbd_device": "/dev/nbd1", 00:05:34.160 "bdev_name": "Malloc1" 00:05:34.160 } 00:05:34.160 ]' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.160 /dev/nbd1' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.160 /dev/nbd1' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.160 256+0 records in 00:05:34.160 256+0 records out 00:05:34.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106753 s, 98.2 MB/s 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.160 256+0 records in 00:05:34.160 256+0 records out 00:05:34.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233199 s, 45.0 MB/s 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.160 256+0 records in 00:05:34.160 256+0 records out 00:05:34.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250015 s, 41.9 MB/s 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.160 09:19:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.160 09:19:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.429 09:19:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.713 09:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.972 09:19:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.972 09:19:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.539 09:19:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.539 [2024-10-16 09:19:59.844681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.539 [2024-10-16 09:19:59.890382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.539 [2024-10-16 09:19:59.890395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.797 [2024-10-16 09:19:59.944718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.797 [2024-10-16 09:19:59.944794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.797 [2024-10-16 09:19:59.944807] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.326 spdk_app_start Round 2 00:05:38.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.326 09:20:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.326 09:20:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.326 09:20:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58340 /var/tmp/spdk-nbd.sock 00:05:38.326 09:20:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58340 ']' 00:05:38.326 09:20:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.326 09:20:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.326 09:20:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
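The waitfornbd / waitfornbd_exit helpers seen throughout these rounds are polling loops over /proc/partitions. A rough sketch of the wait-for-attach side, mirroring the traced commands (the retry limit of 20 matches the (( i <= 20 )) checks in the log; the delay between attempts is an assumption, the real helper may differ):

  nbd_name=nbd1
  for (( i = 1; i <= 20; i++ )); do
      if grep -q -w "$nbd_name" /proc/partitions; then
          break        # device node is registered, safe to use it
      fi
      sleep 0.1        # assumed back-off; not taken from this trace
  done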
00:05:38.326 09:20:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.326 09:20:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.626 09:20:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.626 09:20:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:38.626 09:20:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.903 Malloc0 00:05:38.903 09:20:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.162 Malloc1 00:05:39.162 09:20:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.162 09:20:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.421 /dev/nbd0 00:05:39.421 09:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.421 09:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.421 1+0 records in 00:05:39.421 1+0 records out 
00:05:39.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194757 s, 21.0 MB/s 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:39.421 09:20:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:39.421 09:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.421 09:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.421 09:20:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.680 /dev/nbd1 00:05:39.680 09:20:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.680 09:20:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:39.680 09:20:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.939 1+0 records in 00:05:39.939 1+0 records out 00:05:39.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293144 s, 14.0 MB/s 00:05:39.939 09:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.939 09:20:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:39.939 09:20:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.939 09:20:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:39.939 09:20:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:39.939 09:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.939 09:20:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.939 09:20:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.939 09:20:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.939 09:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.198 { 00:05:40.198 "nbd_device": "/dev/nbd0", 00:05:40.198 "bdev_name": "Malloc0" 00:05:40.198 }, 00:05:40.198 { 00:05:40.198 "nbd_device": "/dev/nbd1", 00:05:40.198 "bdev_name": "Malloc1" 00:05:40.198 } 
00:05:40.198 ]' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.198 { 00:05:40.198 "nbd_device": "/dev/nbd0", 00:05:40.198 "bdev_name": "Malloc0" 00:05:40.198 }, 00:05:40.198 { 00:05:40.198 "nbd_device": "/dev/nbd1", 00:05:40.198 "bdev_name": "Malloc1" 00:05:40.198 } 00:05:40.198 ]' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.198 /dev/nbd1' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.198 /dev/nbd1' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.198 256+0 records in 00:05:40.198 256+0 records out 00:05:40.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105972 s, 98.9 MB/s 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.198 256+0 records in 00:05:40.198 256+0 records out 00:05:40.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200468 s, 52.3 MB/s 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.198 256+0 records in 00:05:40.198 256+0 records out 00:05:40.198 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222772 s, 47.1 MB/s 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.198 09:20:04 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.198 09:20:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.457 09:20:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.715 09:20:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.715 09:20:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.716 09:20:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.974 09:20:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.974 09:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.974 09:20:05 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.232 09:20:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.232 09:20:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.491 09:20:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.749 [2024-10-16 09:20:05.908213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.749 [2024-10-16 09:20:05.952683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.749 [2024-10-16 09:20:05.952696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.749 [2024-10-16 09:20:06.012100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.749 [2024-10-16 09:20:06.012174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.749 [2024-10-16 09:20:06.012187] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.035 09:20:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58340 /var/tmp/spdk-nbd.sock 00:05:45.035 09:20:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58340 ']' 00:05:45.035 09:20:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.035 09:20:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.035 09:20:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
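The killprocess call that ends this test (pid 58340 here) follows the same shape as the earlier scheduler shutdown: confirm the pid is alive and still the expected reactor process, then signal it and wait. A hedged sketch using only the checks visible in the trace:

  pid=58340
  [ -n "$pid" ]                                  # refuse to run with an empty pid
  kill -0 "$pid"                                 # process must still exist
  if [ "$(uname)" = Linux ]; then
      name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 in this run
  fi
  [ "$name" != sudo ]                            # never signal a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                    # works because the pid is a child of this shell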
00:05:45.035 09:20:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.035 09:20:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.035 09:20:09 event.app_repeat -- event/event.sh@39 -- # killprocess 58340 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58340 ']' 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58340 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58340 00:05:45.035 killing process with pid 58340 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58340' 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58340 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58340 00:05:45.035 spdk_app_start is called in Round 0. 00:05:45.035 Shutdown signal received, stop current app iteration 00:05:45.035 Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 reinitialization... 00:05:45.035 spdk_app_start is called in Round 1. 00:05:45.035 Shutdown signal received, stop current app iteration 00:05:45.035 Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 reinitialization... 00:05:45.035 spdk_app_start is called in Round 2. 00:05:45.035 Shutdown signal received, stop current app iteration 00:05:45.035 Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 reinitialization... 00:05:45.035 spdk_app_start is called in Round 3. 00:05:45.035 Shutdown signal received, stop current app iteration 00:05:45.035 ************************************ 00:05:45.035 END TEST app_repeat 00:05:45.035 ************************************ 00:05:45.035 09:20:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.035 09:20:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.035 00:05:45.035 real 0m18.934s 00:05:45.035 user 0m43.263s 00:05:45.035 sys 0m2.825s 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.035 09:20:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.035 09:20:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.035 09:20:09 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.035 09:20:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.035 09:20:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.035 09:20:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.035 ************************************ 00:05:45.035 START TEST cpu_locks 00:05:45.035 ************************************ 00:05:45.035 09:20:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.035 * Looking for test storage... 
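The killprocess calls interleaved through the trace all follow the same shape from autotest_common.sh. A condensed sketch, reconstructed from the xtrace; the non-Linux fallback and the sudo handling are assumptions, since this run only exercises the Linux/reactor_0 path.

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        # only signal a process that is still alive
        kill -0 "$pid" || return 0
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        else
            process_name=tgt   # non-Linux branch is an assumption
        fi
        # targets launched through sudo need the child signalled instead (not exercised here)
        [[ $process_name == sudo ]] && pid=$(ps --ppid "$pid" -o pid= | xargs)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }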
00:05:45.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.035 09:20:09 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.035 09:20:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.035 09:20:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.294 09:20:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.294 --rc genhtml_branch_coverage=1 00:05:45.294 --rc genhtml_function_coverage=1 00:05:45.294 --rc genhtml_legend=1 00:05:45.294 --rc geninfo_all_blocks=1 00:05:45.294 --rc geninfo_unexecuted_blocks=1 00:05:45.294 00:05:45.294 ' 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.294 --rc genhtml_branch_coverage=1 00:05:45.294 --rc genhtml_function_coverage=1 
00:05:45.294 --rc genhtml_legend=1 00:05:45.294 --rc geninfo_all_blocks=1 00:05:45.294 --rc geninfo_unexecuted_blocks=1 00:05:45.294 00:05:45.294 ' 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.294 --rc genhtml_branch_coverage=1 00:05:45.294 --rc genhtml_function_coverage=1 00:05:45.294 --rc genhtml_legend=1 00:05:45.294 --rc geninfo_all_blocks=1 00:05:45.294 --rc geninfo_unexecuted_blocks=1 00:05:45.294 00:05:45.294 ' 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.294 --rc genhtml_branch_coverage=1 00:05:45.294 --rc genhtml_function_coverage=1 00:05:45.294 --rc genhtml_legend=1 00:05:45.294 --rc geninfo_all_blocks=1 00:05:45.294 --rc geninfo_unexecuted_blocks=1 00:05:45.294 00:05:45.294 ' 00:05:45.294 09:20:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.294 09:20:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.294 09:20:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.294 09:20:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.294 09:20:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.294 ************************************ 00:05:45.294 START TEST default_locks 00:05:45.294 ************************************ 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58784 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58784 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58784 ']' 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.294 09:20:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.294 [2024-10-16 09:20:09.562994] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
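The lcov probe traced a little above (lt 1.15 2) runs the generic version comparison from scripts/common.sh before picking the coverage options. Its logic reduces to the sketch below, reconstructed from the xtrace rather than copied from the source: split both versions on '.', '-' and ':', then compare field by field.

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            # missing fields count as 0 (assumption; that branch is not hit in the trace)
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
                [[ $op == '>' || $op == '>=' ]] && return 0
                return 1
            elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
                [[ $op == '<' || $op == '<=' ]] && return 0
                return 1
            fi
        done
        [[ $op == *'='* ]]   # versions identical: only ==, >=, <= succeed
    }

    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, matching the trace's 'return 0'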
00:05:45.294 [2024-10-16 09:20:09.563098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58784 ] 00:05:45.553 [2024-10-16 09:20:09.702592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.553 [2024-10-16 09:20:09.755258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.553 [2024-10-16 09:20:09.832491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.811 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.811 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:45.811 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58784 00:05:45.811 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58784 00:05:45.811 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58784 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58784 ']' 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58784 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58784 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.378 killing process with pid 58784 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58784' 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58784 00:05:46.378 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58784 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58784 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58784 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58784 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58784 ']' 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.637 
09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.637 ERROR: process (pid: 58784) is no longer running 00:05:46.637 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58784) - No such process 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.637 00:05:46.637 real 0m1.504s 00:05:46.637 user 0m1.428s 00:05:46.637 sys 0m0.594s 00:05:46.637 09:20:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.637 09:20:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.637 ************************************ 00:05:46.637 END TEST default_locks 00:05:46.637 ************************************ 00:05:46.896 09:20:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.896 09:20:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.896 09:20:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.896 09:20:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.896 ************************************ 00:05:46.896 START TEST default_locks_via_rpc 00:05:46.896 ************************************ 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58823 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58823 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58823 ']' 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:05:46.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.896 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.896 [2024-10-16 09:20:11.130083] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:46.896 [2024-10-16 09:20:11.130210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58823 ] 00:05:46.896 [2024-10-16 09:20:11.266113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.155 [2024-10-16 09:20:11.322083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.155 [2024-10-16 09:20:11.400647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58823 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58823 00:05:47.414 09:20:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.673 09:20:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58823 00:05:47.673 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58823 ']' 00:05:47.673 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58823 00:05:47.673 09:20:12 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:47.673 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.673 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58823 00:05:47.931 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.931 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.931 killing process with pid 58823 00:05:47.931 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58823' 00:05:47.931 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58823 00:05:47.931 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58823 00:05:48.190 00:05:48.190 real 0m1.436s 00:05:48.190 user 0m1.385s 00:05:48.190 sys 0m0.578s 00:05:48.190 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.190 09:20:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.190 ************************************ 00:05:48.190 END TEST default_locks_via_rpc 00:05:48.190 ************************************ 00:05:48.190 09:20:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:48.190 09:20:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.190 09:20:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.190 09:20:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.190 ************************************ 00:05:48.190 START TEST non_locking_app_on_locked_coremask 00:05:48.190 ************************************ 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58872 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58872 /var/tmp/spdk.sock 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58872 ']' 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
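default_locks_via_rpc, summarized from the trace above: start the target on core 0, drop the core lock at runtime over RPC, confirm no spdk_cpu_lock file is held, then re-enable locking and confirm the lock is back. A sketch of the two checks; rpc_cmd stands for the suite's rpc.py wrapper bound to /var/tmp/spdk.sock, and spdk_tgt_pid is 58823 in this run.

    locks_exist() {
        # the target holds its core locks as locks on /var/tmp/spdk_cpu_lock_* files
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock*)
        # with nullglob (as the empty '()' in the trace suggests) the array is empty
        # while cpumask locks are disabled
        ((${#lock_files[@]} == 0))
    }

    rpc_cmd framework_disable_cpumask_locks   # release the core 0 lock at runtime
    no_locks
    rpc_cmd framework_enable_cpumask_locks    # take it back
    locks_exist "$spdk_tgt_pid"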
00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.190 09:20:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.449 [2024-10-16 09:20:12.616579] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:48.449 [2024-10-16 09:20:12.616697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58872 ] 00:05:48.449 [2024-10-16 09:20:12.755783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.449 [2024-10-16 09:20:12.813439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.708 [2024-10-16 09:20:12.892029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58875 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58875 /var/tmp/spdk2.sock 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58875 ']' 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.708 09:20:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.967 [2024-10-16 09:20:13.179462] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:48.967 [2024-10-16 09:20:13.179641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58875 ] 00:05:48.967 [2024-10-16 09:20:13.326127] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
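non_locking_app_on_locked_coremask boils down to the command pair below: the first target claims core 0, and the second can still start on the same core only because it opts out of the lock. Paths, flags and socket names are the ones visible in the trace; the pid variables are illustrative.

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # instance 1 (pid 58872 in this run) claims the core 0 lock file
    "$spdk_tgt" -m 0x1 &
    spdk_tgt_pid=$!

    # instance 2 (pid 58875) uses the same core mask but opts out of locking,
    # so it starts cleanly and logs "CPU core locks deactivated."
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!
    # (the suite then waits on each RPC socket with waitforlisten before using it)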
00:05:48.967 [2024-10-16 09:20:13.326176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.227 [2024-10-16 09:20:13.440844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.227 [2024-10-16 09:20:13.595673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.164 09:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.164 09:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.164 09:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58872 00:05:50.164 09:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58872 00:05:50.164 09:20:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58872 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58872 ']' 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58872 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58872 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.732 killing process with pid 58872 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58872' 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58872 00:05:50.732 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58872 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58875 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58875 ']' 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58875 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58875 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.670 killing process with pid 58875 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.670 09:20:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58875' 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58875 00:05:51.670 09:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58875 00:05:52.238 00:05:52.238 real 0m3.866s 00:05:52.238 user 0m4.185s 00:05:52.238 sys 0m1.184s 00:05:52.238 ************************************ 00:05:52.238 END TEST non_locking_app_on_locked_coremask 00:05:52.238 09:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.238 09:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.238 ************************************ 00:05:52.238 09:20:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.238 09:20:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.238 09:20:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.238 09:20:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.238 ************************************ 00:05:52.238 START TEST locking_app_on_unlocked_coremask 00:05:52.238 ************************************ 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58952 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58952 /var/tmp/spdk.sock 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58952 ']' 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.238 09:20:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.238 [2024-10-16 09:20:16.538795] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:05:52.238 [2024-10-16 09:20:16.538944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58952 ] 00:05:52.496 [2024-10-16 09:20:16.674637] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.496 [2024-10-16 09:20:16.674682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.496 [2024-10-16 09:20:16.731393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.496 [2024-10-16 09:20:16.810877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58960 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58960 /var/tmp/spdk2.sock 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58960 ']' 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.754 09:20:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.754 [2024-10-16 09:20:17.107233] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
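locking_app_on_unlocked_coremask is the mirror image of the previous case: here it is the first target that starts with --disable-cpumask-locks, so the second instance, with locking enabled on the same -m 0x1 mask, can claim core 0 itself. Reusing $spdk_tgt from the sketch above:

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &    # pid 58952 in this run: never claims core 0
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # pid 58960: locking enabled, takes the core 0 lock itself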
00:05:52.754 [2024-10-16 09:20:17.107337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58960 ] 00:05:53.018 [2024-10-16 09:20:17.254962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.018 [2024-10-16 09:20:17.382450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.280 [2024-10-16 09:20:17.543217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.848 09:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.848 09:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.848 09:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58960 00:05:53.848 09:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58960 00:05:53.848 09:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58952 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58952 ']' 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58952 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58952 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.812 killing process with pid 58952 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58952' 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58952 00:05:54.812 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58952 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58960 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58960 ']' 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 58960 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58960 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.749 killing process with pid 58960 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.749 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58960' 00:05:55.750 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 58960 00:05:55.750 09:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 58960 00:05:56.008 00:05:56.008 real 0m3.933s 00:05:56.008 user 0m4.297s 00:05:56.008 sys 0m1.171s 00:05:56.008 09:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.008 ************************************ 00:05:56.008 END TEST locking_app_on_unlocked_coremask 00:05:56.008 ************************************ 00:05:56.008 09:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.267 09:20:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:56.267 09:20:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.267 09:20:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.267 09:20:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.267 ************************************ 00:05:56.267 START TEST locking_app_on_locked_coremask 00:05:56.267 ************************************ 00:05:56.267 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:56.267 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59032 00:05:56.267 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.267 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59032 /var/tmp/spdk.sock 00:05:56.267 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59032 ']' 00:05:56.267 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.268 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.268 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.268 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.268 09:20:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.268 [2024-10-16 09:20:20.515181] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:56.268 [2024-10-16 09:20:20.515265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59032 ] 00:05:56.268 [2024-10-16 09:20:20.653358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.527 [2024-10-16 09:20:20.718284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.527 [2024-10-16 09:20:20.798498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59041 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59041 /var/tmp/spdk2.sock 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59041 /var/tmp/spdk2.sock 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59041 /var/tmp/spdk2.sock 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59041 ']' 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.786 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.786 [2024-10-16 09:20:21.101152] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:56.786 [2024-10-16 09:20:21.101256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:05:57.045 [2024-10-16 09:20:21.245365] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59032 has claimed it. 00:05:57.045 [2024-10-16 09:20:21.245478] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.613 ERROR: process (pid: 59041) is no longer running 00:05:57.613 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59041) - No such process 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59032 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59032 00:05:57.613 09:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59032 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59032 ']' 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59032 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59032 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.181 killing process with pid 59032 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59032' 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59032 00:05:58.181 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59032 00:05:58.440 00:05:58.440 real 0m2.282s 00:05:58.440 user 0m2.577s 00:05:58.440 sys 0m0.631s 00:05:58.440 09:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.440 09:20:22 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:58.440 ************************************ 00:05:58.440 END TEST locking_app_on_locked_coremask 00:05:58.440 ************************************ 00:05:58.440 09:20:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:58.440 09:20:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.440 09:20:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.440 09:20:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.440 ************************************ 00:05:58.440 START TEST locking_overlapped_coremask 00:05:58.440 ************************************ 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59087 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59087 /var/tmp/spdk.sock 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59087 ']' 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.440 09:20:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.699 [2024-10-16 09:20:22.863106] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
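The NOT helper seen around waitforlisten 59041 above (and again for 59105 below) is how the suite asserts that a command is expected to fail. A condensed sketch of its core logic, reconstructed from the xtrace; the real helper in autotest_common.sh also filters signal exits and allowed patterns via the 'es > 128' and '[[ -n ... ]]' branches shown in the trace.

    NOT() {
        local es=0
        "$@" || es=$?
        # the wrapped command must fail for NOT to succeed
        ((es != 0))
    }

    # the second target tries to claim core 0 while pid 59032 still holds it,
    # so spdk_app_start aborts and waitforlisten must fail:
    NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock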
00:05:58.699 [2024-10-16 09:20:22.863197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59087 ] 00:05:58.699 [2024-10-16 09:20:23.001414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.699 [2024-10-16 09:20:23.074748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.699 [2024-10-16 09:20:23.074793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.699 [2024-10-16 09:20:23.074803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.957 [2024-10-16 09:20:23.154407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59105 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59105 /var/tmp/spdk2.sock 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59105 /var/tmp/spdk2.sock 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59105 /var/tmp/spdk2.sock 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59105 ']' 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.524 09:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.782 [2024-10-16 09:20:23.952019] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:05:59.782 [2024-10-16 09:20:23.952850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59105 ] 00:05:59.782 [2024-10-16 09:20:24.093176] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59087 has claimed it. 00:05:59.782 [2024-10-16 09:20:24.093266] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.349 ERROR: process (pid: 59105) is no longer running 00:06:00.349 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59105) - No such process 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59087 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59087 ']' 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59087 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.349 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59087 00:06:00.608 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.608 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.608 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59087' 00:06:00.608 killing process with pid 59087 00:06:00.608 09:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59087 00:06:00.608 09:20:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59087 00:06:00.866 00:06:00.866 real 0m2.387s 00:06:00.866 user 0m6.849s 00:06:00.866 sys 0m0.475s 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.866 ************************************ 00:06:00.866 END TEST locking_overlapped_coremask 00:06:00.866 ************************************ 00:06:00.866 09:20:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.866 09:20:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.866 09:20:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.866 09:20:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.866 ************************************ 00:06:00.866 START TEST locking_overlapped_coremask_via_rpc 00:06:00.866 ************************************ 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59150 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59150 /var/tmp/spdk.sock 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59150 ']' 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.866 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.125 [2024-10-16 09:20:25.294960] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:01.125 [2024-10-16 09:20:25.295078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59150 ] 00:06:01.125 [2024-10-16 09:20:25.435253] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
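The check_remaining_locks step above relies on spdk_tgt holding one lock file per claimed core under /var/tmp, so after the second instance fails, the surviving 0x7 target should still own exactly /var/tmp/spdk_cpu_lock_000 through _002, which is what the [[ ... == ... ]] comparison verifies. A quick manual check while such a target is running, assuming the default lock location shown in the test:

    $ ls /var/tmp/spdk_cpu_lock_*
    /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002

The locking_overlapped_coremask_via_rpc test starting here launches both targets with --disable-cpumask-locks instead, so no core locks are taken at startup and the overlap is only exercised later over RPC.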
00:06:01.125 [2024-10-16 09:20:25.435466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.125 [2024-10-16 09:20:25.493578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.125 [2024-10-16 09:20:25.493755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.125 [2024-10-16 09:20:25.493760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.384 [2024-10-16 09:20:25.570636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59161 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59161 /var/tmp/spdk2.sock 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59161 ']' 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.384 09:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.642 [2024-10-16 09:20:25.854912] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:01.642 [2024-10-16 09:20:25.855425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59161 ] 00:06:01.642 [2024-10-16 09:20:26.001381] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.642 [2024-10-16 09:20:26.001426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.901 [2024-10-16 09:20:26.132355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.901 [2024-10-16 09:20:26.135630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.901 [2024-10-16 09:20:26.135631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.901 [2024-10-16 09:20:26.277336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.862 [2024-10-16 09:20:26.937707] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59150 has claimed it. 
00:06:02.862 request: 00:06:02.862 { 00:06:02.862 "method": "framework_enable_cpumask_locks", 00:06:02.862 "req_id": 1 00:06:02.862 } 00:06:02.862 Got JSON-RPC error response 00:06:02.862 response: 00:06:02.862 { 00:06:02.862 "code": -32603, 00:06:02.862 "message": "Failed to claim CPU core: 2" 00:06:02.862 } 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59150 /var/tmp/spdk.sock 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59150 ']' 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.862 09:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.862 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.862 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.862 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59161 /var/tmp/spdk2.sock 00:06:02.862 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59161 ']' 00:06:02.862 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.862 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.863 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
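To recap the sequence above: because both targets were started with --disable-cpumask-locks, the overlapping masks 0x7 and 0x1c did not conflict at startup; the first target then enabled its locks over RPC with framework_enable_cpumask_locks (claiming cores 0-2, including core 2), and the same call against the second target's socket fails with the JSON-RPC error shown, code -32603, "Failed to claim CPU core: 2". With the rpc.py client used elsewhere in this run, the two calls would look roughly like this:

    $ scripts/rpc.py framework_enable_cpumask_locks                          # first target, default /var/tmp/spdk.sock
    $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target
    # expected on the second call: error -32603, "Failed to claim CPU core: 2"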
00:06:02.863 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.863 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.430 ************************************ 00:06:03.430 END TEST locking_overlapped_coremask_via_rpc 00:06:03.430 ************************************ 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.430 00:06:03.430 real 0m2.334s 00:06:03.430 user 0m1.353s 00:06:03.430 sys 0m0.176s 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.430 09:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.430 09:20:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:03.430 09:20:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59150 ]] 00:06:03.430 09:20:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59150 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59150 ']' 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59150 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59150 00:06:03.430 killing process with pid 59150 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59150' 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59150 00:06:03.430 09:20:27 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59150 00:06:03.689 09:20:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59161 ]] 00:06:03.689 09:20:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59161 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59161 ']' 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59161 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.689 
09:20:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59161 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:03.689 killing process with pid 59161 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59161' 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59161 00:06:03.689 09:20:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59161 00:06:04.258 09:20:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.258 09:20:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.258 09:20:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59150 ]] 00:06:04.258 09:20:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59150 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59150 ']' 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59150 00:06:04.258 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59150) - No such process 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59150 is not found' 00:06:04.258 Process with pid 59150 is not found 00:06:04.258 09:20:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59161 ]] 00:06:04.258 09:20:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59161 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59161 ']' 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59161 00:06:04.258 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59161) - No such process 00:06:04.258 Process with pid 59161 is not found 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59161 is not found' 00:06:04.258 09:20:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.258 00:06:04.258 real 0m19.244s 00:06:04.258 user 0m34.557s 00:06:04.258 sys 0m5.758s 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.258 09:20:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.258 ************************************ 00:06:04.258 END TEST cpu_locks 00:06:04.258 ************************************ 00:06:04.258 00:06:04.258 real 0m46.812s 00:06:04.258 user 1m31.231s 00:06:04.258 sys 0m9.364s 00:06:04.258 09:20:28 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.258 09:20:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.258 ************************************ 00:06:04.258 END TEST event 00:06:04.258 ************************************ 00:06:04.258 09:20:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.258 09:20:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.258 09:20:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.258 09:20:28 -- common/autotest_common.sh@10 -- # set +x 00:06:04.258 ************************************ 00:06:04.258 START TEST thread 00:06:04.258 ************************************ 00:06:04.258 09:20:28 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.518 * Looking for test storage... 
00:06:04.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:04.518 09:20:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.518 09:20:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.518 09:20:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.518 09:20:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.518 09:20:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.518 09:20:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.518 09:20:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.518 09:20:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.518 09:20:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.518 09:20:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.518 09:20:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.518 09:20:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:04.518 09:20:28 thread -- scripts/common.sh@345 -- # : 1 00:06:04.518 09:20:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.518 09:20:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.518 09:20:28 thread -- scripts/common.sh@365 -- # decimal 1 00:06:04.518 09:20:28 thread -- scripts/common.sh@353 -- # local d=1 00:06:04.518 09:20:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.518 09:20:28 thread -- scripts/common.sh@355 -- # echo 1 00:06:04.518 09:20:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.518 09:20:28 thread -- scripts/common.sh@366 -- # decimal 2 00:06:04.518 09:20:28 thread -- scripts/common.sh@353 -- # local d=2 00:06:04.518 09:20:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.518 09:20:28 thread -- scripts/common.sh@355 -- # echo 2 00:06:04.518 09:20:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.518 09:20:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.518 09:20:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.518 09:20:28 thread -- scripts/common.sh@368 -- # return 0 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:04.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.518 --rc genhtml_branch_coverage=1 00:06:04.518 --rc genhtml_function_coverage=1 00:06:04.518 --rc genhtml_legend=1 00:06:04.518 --rc geninfo_all_blocks=1 00:06:04.518 --rc geninfo_unexecuted_blocks=1 00:06:04.518 00:06:04.518 ' 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:04.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.518 --rc genhtml_branch_coverage=1 00:06:04.518 --rc genhtml_function_coverage=1 00:06:04.518 --rc genhtml_legend=1 00:06:04.518 --rc geninfo_all_blocks=1 00:06:04.518 --rc geninfo_unexecuted_blocks=1 00:06:04.518 00:06:04.518 ' 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:04.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:04.518 --rc genhtml_branch_coverage=1 00:06:04.518 --rc genhtml_function_coverage=1 00:06:04.518 --rc genhtml_legend=1 00:06:04.518 --rc geninfo_all_blocks=1 00:06:04.518 --rc geninfo_unexecuted_blocks=1 00:06:04.518 00:06:04.518 ' 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:04.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.518 --rc genhtml_branch_coverage=1 00:06:04.518 --rc genhtml_function_coverage=1 00:06:04.518 --rc genhtml_legend=1 00:06:04.518 --rc geninfo_all_blocks=1 00:06:04.518 --rc geninfo_unexecuted_blocks=1 00:06:04.518 00:06:04.518 ' 00:06:04.518 09:20:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.518 09:20:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.518 ************************************ 00:06:04.518 START TEST thread_poller_perf 00:06:04.518 ************************************ 00:06:04.518 09:20:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.518 [2024-10-16 09:20:28.844303] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:04.518 [2024-10-16 09:20:28.845147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:06:04.777 [2024-10-16 09:20:28.984808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.777 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:04.777 [2024-10-16 09:20:29.045388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.711 [2024-10-16T09:20:30.115Z] ====================================== 00:06:05.711 [2024-10-16T09:20:30.115Z] busy:2212095163 (cyc) 00:06:05.711 [2024-10-16T09:20:30.115Z] total_run_count: 301000 00:06:05.711 [2024-10-16T09:20:30.115Z] tsc_hz: 2200000000 (cyc) 00:06:05.711 [2024-10-16T09:20:30.115Z] ====================================== 00:06:05.711 [2024-10-16T09:20:30.115Z] poller_cost: 7349 (cyc), 3340 (nsec) 00:06:05.711 00:06:05.711 real 0m1.273s 00:06:05.711 user 0m1.120s 00:06:05.711 sys 0m0.046s 00:06:05.711 09:20:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.711 09:20:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.711 ************************************ 00:06:05.711 END TEST thread_poller_perf 00:06:05.711 ************************************ 00:06:05.970 09:20:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.970 09:20:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:05.970 09:20:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.970 09:20:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.970 ************************************ 00:06:05.970 START TEST thread_poller_perf 00:06:05.970 ************************************ 00:06:05.970 09:20:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.970 [2024-10-16 09:20:30.158594] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:05.970 [2024-10-16 09:20:30.158694] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59327 ] 00:06:05.970 [2024-10-16 09:20:30.291704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.970 Running 1000 pollers for 1 seconds with 0 microseconds period. 
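The poller_cost figures in the summary above follow from the other columns: cost per call is busy divided by total_run_count, and the nanosecond value uses tsc_hz:

    poller_cost = 2212095163 cyc / 301000 calls ≈ 7349 cyc
    7349 cyc / 2.2 cyc per ns (tsc_hz = 2200000000) ≈ 3340 ns

The command-line options map onto the banner the same way: -b 1000 registers 1000 pollers, -l 1 gives each a 1 microsecond period, and -t 1 runs for 1 second; the run starting here repeats the measurement with -l 0, i.e. pollers registered without a period.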
00:06:05.970 [2024-10-16 09:20:30.344514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.388 [2024-10-16T09:20:31.792Z] ====================================== 00:06:07.388 [2024-10-16T09:20:31.792Z] busy:2202460747 (cyc) 00:06:07.388 [2024-10-16T09:20:31.792Z] total_run_count: 4115000 00:06:07.388 [2024-10-16T09:20:31.792Z] tsc_hz: 2200000000 (cyc) 00:06:07.388 [2024-10-16T09:20:31.792Z] ====================================== 00:06:07.388 [2024-10-16T09:20:31.792Z] poller_cost: 535 (cyc), 243 (nsec) 00:06:07.388 00:06:07.388 real 0m1.260s 00:06:07.388 user 0m1.103s 00:06:07.388 sys 0m0.048s 00:06:07.388 09:20:31 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.388 ************************************ 00:06:07.388 END TEST thread_poller_perf 00:06:07.388 ************************************ 00:06:07.388 09:20:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 09:20:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.388 00:06:07.388 real 0m2.795s 00:06:07.388 user 0m2.363s 00:06:07.388 sys 0m0.220s 00:06:07.388 09:20:31 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.388 09:20:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 ************************************ 00:06:07.388 END TEST thread 00:06:07.388 ************************************ 00:06:07.388 09:20:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:07.388 09:20:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:07.388 09:20:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.388 09:20:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.388 09:20:31 -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 ************************************ 00:06:07.388 START TEST app_cmdline 00:06:07.388 ************************************ 00:06:07.388 09:20:31 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:07.388 * Looking for test storage... 
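Applying the same formula to the 0 microsecond run above gives 2202460747 / 4115000 ≈ 535 cycles, about 243 ns per call, roughly 14 times cheaper than the 1 microsecond timed pollers of the previous run (535 vs 7349 cycles on the same 2.2 GHz host); the gap likely reflects the extra timer bookkeeping a timed poller pays on every invocation.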
00:06:07.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.388 09:20:31 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:07.388 09:20:31 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:07.388 09:20:31 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:07.388 09:20:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.388 09:20:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.389 09:20:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:07.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.389 --rc genhtml_branch_coverage=1 00:06:07.389 --rc genhtml_function_coverage=1 00:06:07.389 --rc genhtml_legend=1 00:06:07.389 --rc geninfo_all_blocks=1 00:06:07.389 --rc geninfo_unexecuted_blocks=1 00:06:07.389 00:06:07.389 ' 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:07.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.389 --rc genhtml_branch_coverage=1 00:06:07.389 --rc genhtml_function_coverage=1 00:06:07.389 --rc genhtml_legend=1 00:06:07.389 --rc geninfo_all_blocks=1 00:06:07.389 --rc geninfo_unexecuted_blocks=1 00:06:07.389 
00:06:07.389 ' 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:07.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.389 --rc genhtml_branch_coverage=1 00:06:07.389 --rc genhtml_function_coverage=1 00:06:07.389 --rc genhtml_legend=1 00:06:07.389 --rc geninfo_all_blocks=1 00:06:07.389 --rc geninfo_unexecuted_blocks=1 00:06:07.389 00:06:07.389 ' 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:07.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.389 --rc genhtml_branch_coverage=1 00:06:07.389 --rc genhtml_function_coverage=1 00:06:07.389 --rc genhtml_legend=1 00:06:07.389 --rc geninfo_all_blocks=1 00:06:07.389 --rc geninfo_unexecuted_blocks=1 00:06:07.389 00:06:07.389 ' 00:06:07.389 09:20:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:07.389 09:20:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59410 00:06:07.389 09:20:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59410 00:06:07.389 09:20:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59410 ']' 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.389 09:20:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.389 [2024-10-16 09:20:31.753888] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:07.389 [2024-10-16 09:20:31.754001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59410 ] 00:06:07.648 [2024-10-16 09:20:31.894424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.648 [2024-10-16 09:20:31.955919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.648 [2024-10-16 09:20:32.035014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.906 09:20:32 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.906 09:20:32 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:07.906 09:20:32 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:08.164 { 00:06:08.164 "version": "SPDK v25.01-pre git sha1 27a8e04f9", 00:06:08.164 "fields": { 00:06:08.164 "major": 25, 00:06:08.164 "minor": 1, 00:06:08.165 "patch": 0, 00:06:08.165 "suffix": "-pre", 00:06:08.165 "commit": "27a8e04f9" 00:06:08.165 } 00:06:08.165 } 00:06:08.165 09:20:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:08.165 09:20:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:08.165 09:20:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:08.165 09:20:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:08.423 09:20:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:08.423 09:20:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:08.423 09:20:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.423 09:20:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:08.423 09:20:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:08.423 09:20:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:08.423 09:20:32 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.682 request: 00:06:08.682 { 00:06:08.682 "method": "env_dpdk_get_mem_stats", 00:06:08.682 "req_id": 1 00:06:08.682 } 00:06:08.682 Got JSON-RPC error response 00:06:08.682 response: 00:06:08.682 { 00:06:08.682 "code": -32601, 00:06:08.682 "message": "Method not found" 00:06:08.682 } 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.682 09:20:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59410 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59410 ']' 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59410 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59410 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.682 killing process with pid 59410 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59410' 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@969 -- # kill 59410 00:06:08.682 09:20:32 app_cmdline -- common/autotest_common.sh@974 -- # wait 59410 00:06:09.249 ************************************ 00:06:09.250 END TEST app_cmdline 00:06:09.250 ************************************ 00:06:09.250 00:06:09.250 real 0m1.905s 00:06:09.250 user 0m2.324s 00:06:09.250 sys 0m0.494s 00:06:09.250 09:20:33 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.250 09:20:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.250 09:20:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:09.250 09:20:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.250 09:20:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.250 09:20:33 -- common/autotest_common.sh@10 -- # set +x 00:06:09.250 ************************************ 00:06:09.250 START TEST version 00:06:09.250 ************************************ 00:06:09.250 09:20:33 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:09.250 * Looking for test storage... 
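The app_cmdline run above exercises the --rpcs-allowed allowlist: spdk_tgt was started so that only spdk_get_version and rpc_get_methods are served, so the version query returns the object shown (SPDK v25.01-pre, commit 27a8e04f9) while env_dpdk_get_mem_stats, which is not on the allowlist, is rejected with JSON-RPC error -32601 "Method not found". Reproduced with the same client, roughly:

    $ scripts/rpc.py spdk_get_version          # allowed, prints the version object
    $ scripts/rpc.py env_dpdk_get_mem_stats    # not allowed on this target, fails with -32601 Method not found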
00:06:09.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:09.250 09:20:33 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:09.250 09:20:33 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:09.250 09:20:33 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:09.250 09:20:33 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:09.250 09:20:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.250 09:20:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.250 09:20:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.250 09:20:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.250 09:20:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.250 09:20:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.250 09:20:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.250 09:20:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.250 09:20:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.250 09:20:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.250 09:20:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.250 09:20:33 version -- scripts/common.sh@344 -- # case "$op" in 00:06:09.250 09:20:33 version -- scripts/common.sh@345 -- # : 1 00:06:09.250 09:20:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.250 09:20:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.250 09:20:33 version -- scripts/common.sh@365 -- # decimal 1 00:06:09.509 09:20:33 version -- scripts/common.sh@353 -- # local d=1 00:06:09.509 09:20:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.509 09:20:33 version -- scripts/common.sh@355 -- # echo 1 00:06:09.509 09:20:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.509 09:20:33 version -- scripts/common.sh@366 -- # decimal 2 00:06:09.509 09:20:33 version -- scripts/common.sh@353 -- # local d=2 00:06:09.509 09:20:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.509 09:20:33 version -- scripts/common.sh@355 -- # echo 2 00:06:09.509 09:20:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.509 09:20:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.509 09:20:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.509 09:20:33 version -- scripts/common.sh@368 -- # return 0 00:06:09.509 09:20:33 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.509 09:20:33 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:09.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.509 --rc genhtml_branch_coverage=1 00:06:09.509 --rc genhtml_function_coverage=1 00:06:09.509 --rc genhtml_legend=1 00:06:09.509 --rc geninfo_all_blocks=1 00:06:09.509 --rc geninfo_unexecuted_blocks=1 00:06:09.509 00:06:09.509 ' 00:06:09.509 09:20:33 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:09.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.509 --rc genhtml_branch_coverage=1 00:06:09.509 --rc genhtml_function_coverage=1 00:06:09.509 --rc genhtml_legend=1 00:06:09.509 --rc geninfo_all_blocks=1 00:06:09.509 --rc geninfo_unexecuted_blocks=1 00:06:09.509 00:06:09.509 ' 00:06:09.509 09:20:33 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:09.509 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:09.509 --rc genhtml_branch_coverage=1 00:06:09.509 --rc genhtml_function_coverage=1 00:06:09.509 --rc genhtml_legend=1 00:06:09.509 --rc geninfo_all_blocks=1 00:06:09.509 --rc geninfo_unexecuted_blocks=1 00:06:09.509 00:06:09.509 ' 00:06:09.509 09:20:33 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:09.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.509 --rc genhtml_branch_coverage=1 00:06:09.509 --rc genhtml_function_coverage=1 00:06:09.509 --rc genhtml_legend=1 00:06:09.509 --rc geninfo_all_blocks=1 00:06:09.509 --rc geninfo_unexecuted_blocks=1 00:06:09.509 00:06:09.509 ' 00:06:09.509 09:20:33 version -- app/version.sh@17 -- # get_header_version major 00:06:09.509 09:20:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # cut -f2 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.509 09:20:33 version -- app/version.sh@17 -- # major=25 00:06:09.509 09:20:33 version -- app/version.sh@18 -- # get_header_version minor 00:06:09.509 09:20:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # cut -f2 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.509 09:20:33 version -- app/version.sh@18 -- # minor=1 00:06:09.509 09:20:33 version -- app/version.sh@19 -- # get_header_version patch 00:06:09.509 09:20:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # cut -f2 00:06:09.509 09:20:33 version -- app/version.sh@19 -- # patch=0 00:06:09.509 09:20:33 version -- app/version.sh@20 -- # get_header_version suffix 00:06:09.509 09:20:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # cut -f2 00:06:09.509 09:20:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.509 09:20:33 version -- app/version.sh@20 -- # suffix=-pre 00:06:09.509 09:20:33 version -- app/version.sh@22 -- # version=25.1 00:06:09.509 09:20:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:09.509 09:20:33 version -- app/version.sh@28 -- # version=25.1rc0 00:06:09.509 09:20:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:09.509 09:20:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:09.509 09:20:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:09.509 09:20:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:09.509 00:06:09.509 real 0m0.285s 00:06:09.509 user 0m0.195s 00:06:09.509 sys 0m0.126s 00:06:09.509 09:20:33 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.509 09:20:33 version -- common/autotest_common.sh@10 -- # set +x 00:06:09.509 ************************************ 00:06:09.509 END TEST version 00:06:09.509 ************************************ 00:06:09.509 09:20:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:09.509 09:20:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:09.509 09:20:33 -- spdk/autotest.sh@194 -- # uname -s 00:06:09.509 09:20:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:09.509 09:20:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:09.509 09:20:33 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:09.509 09:20:33 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:09.509 09:20:33 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:09.509 09:20:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.509 09:20:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.509 09:20:33 -- common/autotest_common.sh@10 -- # set +x 00:06:09.509 ************************************ 00:06:09.509 START TEST spdk_dd 00:06:09.509 ************************************ 00:06:09.509 09:20:33 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:09.509 * Looking for test storage... 00:06:09.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:09.509 09:20:33 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:09.509 09:20:33 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:06:09.509 09:20:33 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:09.769 09:20:33 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:09.769 09:20:33 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.769 09:20:33 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:09.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.769 --rc genhtml_branch_coverage=1 00:06:09.769 --rc genhtml_function_coverage=1 00:06:09.769 --rc genhtml_legend=1 00:06:09.769 --rc geninfo_all_blocks=1 00:06:09.769 --rc geninfo_unexecuted_blocks=1 00:06:09.769 00:06:09.769 ' 00:06:09.769 09:20:33 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:09.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.769 --rc genhtml_branch_coverage=1 00:06:09.769 --rc genhtml_function_coverage=1 00:06:09.769 --rc genhtml_legend=1 00:06:09.769 --rc geninfo_all_blocks=1 00:06:09.769 --rc geninfo_unexecuted_blocks=1 00:06:09.769 00:06:09.769 ' 00:06:09.769 09:20:33 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:09.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.769 --rc genhtml_branch_coverage=1 00:06:09.769 --rc genhtml_function_coverage=1 00:06:09.769 --rc genhtml_legend=1 00:06:09.769 --rc geninfo_all_blocks=1 00:06:09.769 --rc geninfo_unexecuted_blocks=1 00:06:09.769 00:06:09.769 ' 00:06:09.769 09:20:33 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:09.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.769 --rc genhtml_branch_coverage=1 00:06:09.769 --rc genhtml_function_coverage=1 00:06:09.769 --rc genhtml_legend=1 00:06:09.769 --rc geninfo_all_blocks=1 00:06:09.769 --rc geninfo_unexecuted_blocks=1 00:06:09.769 00:06:09.769 ' 00:06:09.769 09:20:33 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.769 09:20:33 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.769 09:20:33 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.769 09:20:33 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.769 09:20:33 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.769 09:20:33 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:09.769 09:20:33 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.769 09:20:33 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:10.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:10.029 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:10.029 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:10.029 09:20:34 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:10.029 09:20:34 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:10.029 09:20:34 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:10.029 09:20:34 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:10.029 09:20:34 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:10.029 09:20:34 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:10.029 09:20:34 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:10.029 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.029 09:20:34 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.029 09:20:34 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
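The trace above is the NVMe enumeration scripts/common.sh performs before spdk_dd runs: it renders PCI class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe) as two-digit hex, filters lspci -mm -n -D on class code 0108 / -p02, runs each candidate BDF through pci_can_use, and ends up with 0000:00:10.0 and 0000:00:11.0. A minimal stand-alone sketch of that lspci filtering, for reference only (it assumes lspci is available and leaves out the PCI allow/block-list handling the real helper performs):

    # List NVMe controllers (class 01, subclass 08, prog-if 02) by PCI BDF.
    # Simplified illustration of the pipeline traced from scripts/common.sh.
    nvme_bdfs() {
        local class subclass progif
        class=$(printf %02x 1)      # -> 01
        subclass=$(printf %02x 8)   # -> 08
        progif=$(printf %02x 2)     # -> 02
        lspci -mm -n -D \
            | grep -i -- "-p${progif}" \
            | awk -v "cc=\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
            | tr -d '"'
    }

    nvme_bdfs    # on this VM: 0000:00:10.0 and 0000:00:11.0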
00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
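The long run of comparisons here is dd/common.sh's check_liburing: it lists the NEEDED entries of the spdk_dd binary with objdump -p, tests each one against the glob liburing.so.*, and sets liburing_in_use once a match is found (with a fallback, visible further below, that consults CONFIG_URING in build_config.sh when the binary is not linked directly). A small sketch of that loop, illustrative only, using the same field splitting the trace shows:

    # Flag whether a binary has a DT_NEEDED dependency on liburing.
    # Mirrors the read/glob pattern traced from dd/common.sh; simplified sketch.
    check_liburing() {
        local binary=$1 lib liburing_in_use=0
        while read -r _ lib _; do                       # objdump lines: "  NEEDED  <libname>"
            [[ $lib == liburing.so.* ]] && liburing_in_use=1
        done < <(objdump -p "$binary" | grep NEEDED)
        (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'
        echo "$liburing_in_use"
    }

    check_liburing /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd    # prints the banner and 1 on this build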
00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.2 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:10.338 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:10.339 * spdk_dd linked to liburing 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:06:10.339 09:20:34 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:06:10.339 09:20:34 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:10.339 09:20:34 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:10.339 09:20:34 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:10.339 09:20:34 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:10.339 09:20:34 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:10.339 09:20:34 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.339 09:20:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:10.339 ************************************ 00:06:10.339 START TEST spdk_dd_basic_rw 00:06:10.339 ************************************ 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:10.339 * Looking for test storage... 00:06:10.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.339 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.340 --rc genhtml_branch_coverage=1 00:06:10.340 --rc genhtml_function_coverage=1 00:06:10.340 --rc genhtml_legend=1 00:06:10.340 --rc geninfo_all_blocks=1 00:06:10.340 --rc geninfo_unexecuted_blocks=1 00:06:10.340 00:06:10.340 ' 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.340 --rc genhtml_branch_coverage=1 00:06:10.340 --rc genhtml_function_coverage=1 00:06:10.340 --rc genhtml_legend=1 00:06:10.340 --rc geninfo_all_blocks=1 00:06:10.340 --rc geninfo_unexecuted_blocks=1 00:06:10.340 00:06:10.340 ' 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.340 --rc genhtml_branch_coverage=1 00:06:10.340 --rc genhtml_function_coverage=1 00:06:10.340 --rc genhtml_legend=1 00:06:10.340 --rc geninfo_all_blocks=1 00:06:10.340 --rc geninfo_unexecuted_blocks=1 00:06:10.340 00:06:10.340 ' 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:10.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.340 --rc genhtml_branch_coverage=1 00:06:10.340 --rc genhtml_function_coverage=1 00:06:10.340 --rc genhtml_legend=1 00:06:10.340 --rc geninfo_all_blocks=1 00:06:10.340 --rc geninfo_unexecuted_blocks=1 00:06:10.340 00:06:10.340 ' 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.340 09:20:34 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:10.340 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:10.602 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:10.602 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.603 ************************************ 00:06:10.603 START TEST dd_bs_lt_native_bs 00:06:10.603 ************************************ 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.603 09:20:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:10.603 { 00:06:10.603 "subsystems": [ 00:06:10.603 { 00:06:10.603 "subsystem": "bdev", 00:06:10.603 "config": [ 00:06:10.603 { 00:06:10.603 "params": { 00:06:10.603 "trtype": "pcie", 00:06:10.603 "traddr": "0000:00:10.0", 00:06:10.603 "name": "Nvme0" 00:06:10.603 }, 00:06:10.603 "method": "bdev_nvme_attach_controller" 00:06:10.603 }, 00:06:10.603 { 00:06:10.603 "method": "bdev_wait_for_examine" 00:06:10.603 } 00:06:10.603 ] 00:06:10.603 } 00:06:10.603 ] 00:06:10.603 } 00:06:10.603 [2024-10-16 09:20:34.973338] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:10.603 [2024-10-16 09:20:34.973465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59759 ] 00:06:10.862 [2024-10-16 09:20:35.110459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.862 [2024-10-16 09:20:35.168422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.862 [2024-10-16 09:20:35.225557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.121 [2024-10-16 09:20:35.339477] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:11.121 [2024-10-16 09:20:35.339552] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.121 [2024-10-16 09:20:35.471116] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.380 00:06:11.380 real 0m0.643s 00:06:11.380 user 0m0.443s 00:06:11.380 sys 0m0.174s 00:06:11.380 ************************************ 00:06:11.380 END TEST dd_bs_lt_native_bs 00:06:11.380 ************************************ 00:06:11.380 
09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 ************************************ 00:06:11.380 START TEST dd_rw 00:06:11.380 ************************************ 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:11.380 09:20:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.948 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:11.948 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:11.948 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.948 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.948 { 00:06:11.948 "subsystems": [ 00:06:11.948 { 00:06:11.948 "subsystem": "bdev", 00:06:11.948 "config": [ 00:06:11.948 { 00:06:11.948 "params": { 00:06:11.948 "trtype": "pcie", 00:06:11.948 "traddr": "0000:00:10.0", 00:06:11.948 "name": "Nvme0" 00:06:11.948 }, 00:06:11.948 "method": "bdev_nvme_attach_controller" 00:06:11.948 }, 00:06:11.948 { 00:06:11.948 "method": "bdev_wait_for_examine" 00:06:11.948 } 00:06:11.948 ] 00:06:11.948 } 00:06:11.948 
] 00:06:11.948 } 00:06:11.948 [2024-10-16 09:20:36.232896] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:11.948 [2024-10-16 09:20:36.233147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59790 ] 00:06:12.208 [2024-10-16 09:20:36.369996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.208 [2024-10-16 09:20:36.412719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.208 [2024-10-16 09:20:36.467170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.208  [2024-10-16T09:20:36.871Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:12.467 00:06:12.467 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:12.467 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:12.467 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.467 09:20:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.467 { 00:06:12.467 "subsystems": [ 00:06:12.467 { 00:06:12.467 "subsystem": "bdev", 00:06:12.467 "config": [ 00:06:12.467 { 00:06:12.467 "params": { 00:06:12.467 "trtype": "pcie", 00:06:12.467 "traddr": "0000:00:10.0", 00:06:12.467 "name": "Nvme0" 00:06:12.467 }, 00:06:12.467 "method": "bdev_nvme_attach_controller" 00:06:12.467 }, 00:06:12.467 { 00:06:12.467 "method": "bdev_wait_for_examine" 00:06:12.467 } 00:06:12.467 ] 00:06:12.467 } 00:06:12.467 ] 00:06:12.467 } 00:06:12.467 [2024-10-16 09:20:36.822443] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:12.467 [2024-10-16 09:20:36.822696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59803 ] 00:06:12.726 [2024-10-16 09:20:36.960703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.726 [2024-10-16 09:20:36.999609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.726 [2024-10-16 09:20:37.051604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.985  [2024-10-16T09:20:37.389Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:12.985 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.985 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.243 { 00:06:13.243 "subsystems": [ 00:06:13.243 { 00:06:13.243 "subsystem": "bdev", 00:06:13.243 "config": [ 00:06:13.243 { 00:06:13.243 "params": { 00:06:13.243 "trtype": "pcie", 00:06:13.243 "traddr": "0000:00:10.0", 00:06:13.243 "name": "Nvme0" 00:06:13.243 }, 00:06:13.243 "method": "bdev_nvme_attach_controller" 00:06:13.243 }, 00:06:13.243 { 00:06:13.243 "method": "bdev_wait_for_examine" 00:06:13.243 } 00:06:13.243 ] 00:06:13.243 } 00:06:13.243 ] 00:06:13.243 } 00:06:13.243 [2024-10-16 09:20:37.398206] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:13.243 [2024-10-16 09:20:37.398447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59819 ] 00:06:13.243 [2024-10-16 09:20:37.536194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.243 [2024-10-16 09:20:37.576772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.243 [2024-10-16 09:20:37.636258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.502  [2024-10-16T09:20:38.165Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:13.761 00:06:13.761 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:13.761 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:13.761 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:13.761 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:13.761 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:13.761 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:13.761 09:20:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.328 09:20:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:14.328 09:20:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:14.328 09:20:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.328 09:20:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.328 [2024-10-16 09:20:38.568791] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:14.328 [2024-10-16 09:20:38.568919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59838 ] 00:06:14.328 { 00:06:14.328 "subsystems": [ 00:06:14.328 { 00:06:14.328 "subsystem": "bdev", 00:06:14.328 "config": [ 00:06:14.328 { 00:06:14.328 "params": { 00:06:14.328 "trtype": "pcie", 00:06:14.328 "traddr": "0000:00:10.0", 00:06:14.328 "name": "Nvme0" 00:06:14.328 }, 00:06:14.328 "method": "bdev_nvme_attach_controller" 00:06:14.328 }, 00:06:14.328 { 00:06:14.328 "method": "bdev_wait_for_examine" 00:06:14.328 } 00:06:14.328 ] 00:06:14.328 } 00:06:14.328 ] 00:06:14.328 } 00:06:14.328 [2024-10-16 09:20:38.710951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.587 [2024-10-16 09:20:38.763042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.587 [2024-10-16 09:20:38.820154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.587  [2024-10-16T09:20:39.250Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:14.846 00:06:14.846 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:14.846 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:14.846 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.846 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.846 [2024-10-16 09:20:39.163275] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:14.846 [2024-10-16 09:20:39.163586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59857 ] 00:06:14.846 { 00:06:14.846 "subsystems": [ 00:06:14.846 { 00:06:14.846 "subsystem": "bdev", 00:06:14.846 "config": [ 00:06:14.846 { 00:06:14.846 "params": { 00:06:14.846 "trtype": "pcie", 00:06:14.846 "traddr": "0000:00:10.0", 00:06:14.846 "name": "Nvme0" 00:06:14.846 }, 00:06:14.846 "method": "bdev_nvme_attach_controller" 00:06:14.846 }, 00:06:14.846 { 00:06:14.846 "method": "bdev_wait_for_examine" 00:06:14.846 } 00:06:14.846 ] 00:06:14.846 } 00:06:14.846 ] 00:06:14.846 } 00:06:15.105 [2024-10-16 09:20:39.295319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.105 [2024-10-16 09:20:39.333450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.105 [2024-10-16 09:20:39.385197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.105  [2024-10-16T09:20:39.768Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:15.364 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.364 09:20:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.364 { 00:06:15.364 "subsystems": [ 00:06:15.364 { 00:06:15.364 "subsystem": "bdev", 00:06:15.364 "config": [ 00:06:15.364 { 00:06:15.364 "params": { 00:06:15.364 "trtype": "pcie", 00:06:15.364 "traddr": "0000:00:10.0", 00:06:15.364 "name": "Nvme0" 00:06:15.364 }, 00:06:15.364 "method": "bdev_nvme_attach_controller" 00:06:15.364 }, 00:06:15.364 { 00:06:15.364 "method": "bdev_wait_for_examine" 00:06:15.364 } 00:06:15.364 ] 00:06:15.364 } 00:06:15.364 ] 00:06:15.364 } 00:06:15.364 [2024-10-16 09:20:39.728367] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:15.364 [2024-10-16 09:20:39.728638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59867 ] 00:06:15.623 [2024-10-16 09:20:39.865522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.623 [2024-10-16 09:20:39.912748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.623 [2024-10-16 09:20:39.965667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.881  [2024-10-16T09:20:40.285Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:15.881 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:15.881 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.448 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:16.448 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:16.448 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.448 09:20:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.448 [2024-10-16 09:20:40.847373] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:16.448 [2024-10-16 09:20:40.847712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59891 ] 00:06:16.708 { 00:06:16.708 "subsystems": [ 00:06:16.708 { 00:06:16.708 "subsystem": "bdev", 00:06:16.708 "config": [ 00:06:16.708 { 00:06:16.708 "params": { 00:06:16.708 "trtype": "pcie", 00:06:16.708 "traddr": "0000:00:10.0", 00:06:16.708 "name": "Nvme0" 00:06:16.708 }, 00:06:16.708 "method": "bdev_nvme_attach_controller" 00:06:16.708 }, 00:06:16.708 { 00:06:16.708 "method": "bdev_wait_for_examine" 00:06:16.708 } 00:06:16.708 ] 00:06:16.708 } 00:06:16.708 ] 00:06:16.708 } 00:06:16.708 [2024-10-16 09:20:40.989053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.708 [2024-10-16 09:20:41.028951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.708 [2024-10-16 09:20:41.083084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.967  [2024-10-16T09:20:41.371Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:16.967 00:06:16.968 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:16.968 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:16.968 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.968 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.227 [2024-10-16 09:20:41.445984] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:17.227 [2024-10-16 09:20:41.446113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59905 ] 00:06:17.227 { 00:06:17.227 "subsystems": [ 00:06:17.227 { 00:06:17.227 "subsystem": "bdev", 00:06:17.227 "config": [ 00:06:17.227 { 00:06:17.227 "params": { 00:06:17.227 "trtype": "pcie", 00:06:17.227 "traddr": "0000:00:10.0", 00:06:17.227 "name": "Nvme0" 00:06:17.227 }, 00:06:17.227 "method": "bdev_nvme_attach_controller" 00:06:17.227 }, 00:06:17.227 { 00:06:17.227 "method": "bdev_wait_for_examine" 00:06:17.227 } 00:06:17.227 ] 00:06:17.227 } 00:06:17.227 ] 00:06:17.227 } 00:06:17.227 [2024-10-16 09:20:41.584988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.227 [2024-10-16 09:20:41.628731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.486 [2024-10-16 09:20:41.681290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.486  [2024-10-16T09:20:42.149Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:17.745 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.745 09:20:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.745 [2024-10-16 09:20:42.041995] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:17.745 [2024-10-16 09:20:42.042091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59926 ] 00:06:17.745 { 00:06:17.745 "subsystems": [ 00:06:17.745 { 00:06:17.745 "subsystem": "bdev", 00:06:17.745 "config": [ 00:06:17.745 { 00:06:17.745 "params": { 00:06:17.745 "trtype": "pcie", 00:06:17.745 "traddr": "0000:00:10.0", 00:06:17.745 "name": "Nvme0" 00:06:17.745 }, 00:06:17.745 "method": "bdev_nvme_attach_controller" 00:06:17.745 }, 00:06:17.745 { 00:06:17.745 "method": "bdev_wait_for_examine" 00:06:17.745 } 00:06:17.745 ] 00:06:17.745 } 00:06:17.745 ] 00:06:17.745 } 00:06:18.004 [2024-10-16 09:20:42.180996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.004 [2024-10-16 09:20:42.225682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.004 [2024-10-16 09:20:42.281580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.004  [2024-10-16T09:20:42.667Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:18.263 00:06:18.263 09:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:18.263 09:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:18.263 09:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:18.263 09:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:18.263 09:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:18.263 09:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:18.263 09:20:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.848 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:18.848 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:18.848 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.848 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.848 [2024-10-16 09:20:43.172499] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:18.849 [2024-10-16 09:20:43.174716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59945 ] 00:06:18.849 { 00:06:18.849 "subsystems": [ 00:06:18.849 { 00:06:18.849 "subsystem": "bdev", 00:06:18.849 "config": [ 00:06:18.849 { 00:06:18.849 "params": { 00:06:18.849 "trtype": "pcie", 00:06:18.849 "traddr": "0000:00:10.0", 00:06:18.849 "name": "Nvme0" 00:06:18.849 }, 00:06:18.849 "method": "bdev_nvme_attach_controller" 00:06:18.849 }, 00:06:18.849 { 00:06:18.849 "method": "bdev_wait_for_examine" 00:06:18.849 } 00:06:18.849 ] 00:06:18.849 } 00:06:18.849 ] 00:06:18.849 } 00:06:19.131 [2024-10-16 09:20:43.312611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.131 [2024-10-16 09:20:43.357829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.131 [2024-10-16 09:20:43.412070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.131  [2024-10-16T09:20:43.794Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:19.390 00:06:19.390 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:19.390 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:19.390 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.390 09:20:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.390 [2024-10-16 09:20:43.755183] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:19.390 [2024-10-16 09:20:43.755279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59953 ] 00:06:19.390 { 00:06:19.390 "subsystems": [ 00:06:19.390 { 00:06:19.390 "subsystem": "bdev", 00:06:19.390 "config": [ 00:06:19.390 { 00:06:19.390 "params": { 00:06:19.390 "trtype": "pcie", 00:06:19.390 "traddr": "0000:00:10.0", 00:06:19.390 "name": "Nvme0" 00:06:19.390 }, 00:06:19.390 "method": "bdev_nvme_attach_controller" 00:06:19.390 }, 00:06:19.390 { 00:06:19.390 "method": "bdev_wait_for_examine" 00:06:19.390 } 00:06:19.390 ] 00:06:19.390 } 00:06:19.390 ] 00:06:19.390 } 00:06:19.649 [2024-10-16 09:20:43.890345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.649 [2024-10-16 09:20:43.937274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.649 [2024-10-16 09:20:43.992934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.908  [2024-10-16T09:20:44.312Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:19.908 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.908 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.167 { 00:06:20.167 "subsystems": [ 00:06:20.167 { 00:06:20.167 "subsystem": "bdev", 00:06:20.167 "config": [ 00:06:20.167 { 00:06:20.167 "params": { 00:06:20.167 "trtype": "pcie", 00:06:20.167 "traddr": "0000:00:10.0", 00:06:20.167 "name": "Nvme0" 00:06:20.167 }, 00:06:20.167 "method": "bdev_nvme_attach_controller" 00:06:20.167 }, 00:06:20.167 { 00:06:20.167 "method": "bdev_wait_for_examine" 00:06:20.167 } 00:06:20.167 ] 00:06:20.167 } 00:06:20.167 ] 00:06:20.167 } 00:06:20.167 [2024-10-16 09:20:44.338860] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:20.167 [2024-10-16 09:20:44.338956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59974 ] 00:06:20.167 [2024-10-16 09:20:44.475494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.167 [2024-10-16 09:20:44.514266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.167 [2024-10-16 09:20:44.565190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.426  [2024-10-16T09:20:45.090Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:20.686 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:20.686 09:20:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.945 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:20.945 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:20.945 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.945 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.945 [2024-10-16 09:20:45.322775] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:20.945 [2024-10-16 09:20:45.322873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59993 ] 00:06:20.945 { 00:06:20.945 "subsystems": [ 00:06:20.945 { 00:06:20.945 "subsystem": "bdev", 00:06:20.945 "config": [ 00:06:20.945 { 00:06:20.945 "params": { 00:06:20.945 "trtype": "pcie", 00:06:20.945 "traddr": "0000:00:10.0", 00:06:20.945 "name": "Nvme0" 00:06:20.945 }, 00:06:20.945 "method": "bdev_nvme_attach_controller" 00:06:20.945 }, 00:06:20.945 { 00:06:20.945 "method": "bdev_wait_for_examine" 00:06:20.945 } 00:06:20.945 ] 00:06:20.945 } 00:06:20.945 ] 00:06:20.945 } 00:06:21.205 [2024-10-16 09:20:45.463930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.205 [2024-10-16 09:20:45.521249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.205 [2024-10-16 09:20:45.586028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.464  [2024-10-16T09:20:46.127Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:21.723 00:06:21.723 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:21.723 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:21.723 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.723 09:20:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.723 { 00:06:21.723 "subsystems": [ 00:06:21.723 { 00:06:21.723 "subsystem": "bdev", 00:06:21.723 "config": [ 00:06:21.723 { 00:06:21.723 "params": { 00:06:21.723 "trtype": "pcie", 00:06:21.723 "traddr": "0000:00:10.0", 00:06:21.723 "name": "Nvme0" 00:06:21.723 }, 00:06:21.723 "method": "bdev_nvme_attach_controller" 00:06:21.723 }, 00:06:21.723 { 00:06:21.723 "method": "bdev_wait_for_examine" 00:06:21.723 } 00:06:21.723 ] 00:06:21.723 } 00:06:21.723 ] 00:06:21.723 } 00:06:21.723 [2024-10-16 09:20:45.977240] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:21.723 [2024-10-16 09:20:45.977334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60001 ] 00:06:21.723 [2024-10-16 09:20:46.116835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.982 [2024-10-16 09:20:46.177885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.982 [2024-10-16 09:20:46.242214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.982  [2024-10-16T09:20:46.645Z] Copying: 48/48 [kB] (average 23 MBps) 00:06:22.241 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.241 09:20:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.241 { 00:06:22.241 "subsystems": [ 00:06:22.241 { 00:06:22.241 "subsystem": "bdev", 00:06:22.241 "config": [ 00:06:22.241 { 00:06:22.241 "params": { 00:06:22.241 "trtype": "pcie", 00:06:22.241 "traddr": "0000:00:10.0", 00:06:22.241 "name": "Nvme0" 00:06:22.241 }, 00:06:22.241 "method": "bdev_nvme_attach_controller" 00:06:22.241 }, 00:06:22.241 { 00:06:22.241 "method": "bdev_wait_for_examine" 00:06:22.241 } 00:06:22.241 ] 00:06:22.241 } 00:06:22.241 ] 00:06:22.241 } 00:06:22.241 [2024-10-16 09:20:46.636002] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:22.241 [2024-10-16 09:20:46.636108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60022 ] 00:06:22.500 [2024-10-16 09:20:46.776490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.500 [2024-10-16 09:20:46.841369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.759 [2024-10-16 09:20:46.906843] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.759  [2024-10-16T09:20:47.425Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:23.021 00:06:23.021 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:23.021 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:23.021 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:23.021 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:23.021 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:23.021 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:23.021 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.589 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:23.589 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:23.589 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.589 09:20:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.589 [2024-10-16 09:20:47.807410] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:23.589 [2024-10-16 09:20:47.807507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60041 ] 00:06:23.589 { 00:06:23.589 "subsystems": [ 00:06:23.589 { 00:06:23.589 "subsystem": "bdev", 00:06:23.589 "config": [ 00:06:23.589 { 00:06:23.589 "params": { 00:06:23.589 "trtype": "pcie", 00:06:23.589 "traddr": "0000:00:10.0", 00:06:23.589 "name": "Nvme0" 00:06:23.589 }, 00:06:23.589 "method": "bdev_nvme_attach_controller" 00:06:23.589 }, 00:06:23.589 { 00:06:23.589 "method": "bdev_wait_for_examine" 00:06:23.589 } 00:06:23.589 ] 00:06:23.589 } 00:06:23.589 ] 00:06:23.589 } 00:06:23.589 [2024-10-16 09:20:47.948396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.848 [2024-10-16 09:20:48.009403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.848 [2024-10-16 09:20:48.067682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.848  [2024-10-16T09:20:48.511Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:24.107 00:06:24.107 09:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:24.107 09:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:24.107 09:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.107 09:20:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.107 { 00:06:24.107 "subsystems": [ 00:06:24.107 { 00:06:24.107 "subsystem": "bdev", 00:06:24.107 "config": [ 00:06:24.107 { 00:06:24.107 "params": { 00:06:24.107 "trtype": "pcie", 00:06:24.107 "traddr": "0000:00:10.0", 00:06:24.107 "name": "Nvme0" 00:06:24.107 }, 00:06:24.107 "method": "bdev_nvme_attach_controller" 00:06:24.107 }, 00:06:24.107 { 00:06:24.107 "method": "bdev_wait_for_examine" 00:06:24.107 } 00:06:24.107 ] 00:06:24.107 } 00:06:24.107 ] 00:06:24.107 } 00:06:24.107 [2024-10-16 09:20:48.444258] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:24.107 [2024-10-16 09:20:48.444390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60060 ] 00:06:24.366 [2024-10-16 09:20:48.584267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.366 [2024-10-16 09:20:48.647907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.366 [2024-10-16 09:20:48.710160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.625  [2024-10-16T09:20:49.288Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:24.884 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.884 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.884 [2024-10-16 09:20:49.103116] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:24.884 [2024-10-16 09:20:49.103261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:06:24.884 { 00:06:24.884 "subsystems": [ 00:06:24.884 { 00:06:24.884 "subsystem": "bdev", 00:06:24.884 "config": [ 00:06:24.884 { 00:06:24.884 "params": { 00:06:24.884 "trtype": "pcie", 00:06:24.884 "traddr": "0000:00:10.0", 00:06:24.884 "name": "Nvme0" 00:06:24.884 }, 00:06:24.884 "method": "bdev_nvme_attach_controller" 00:06:24.884 }, 00:06:24.884 { 00:06:24.884 "method": "bdev_wait_for_examine" 00:06:24.884 } 00:06:24.884 ] 00:06:24.884 } 00:06:24.884 ] 00:06:24.884 } 00:06:24.884 [2024-10-16 09:20:49.244708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.143 [2024-10-16 09:20:49.312506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.143 [2024-10-16 09:20:49.376841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.143  [2024-10-16T09:20:49.805Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:25.401 00:06:25.401 00:06:25.401 real 0m14.116s 00:06:25.401 user 0m10.192s 00:06:25.401 sys 0m5.577s 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.401 ************************************ 00:06:25.401 END TEST dd_rw 00:06:25.401 ************************************ 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.401 ************************************ 00:06:25.401 START TEST dd_rw_offset 00:06:25.401 ************************************ 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:25.401 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:25.661 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:25.661 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=34um3h8x651suhlgm84e142c5uxs2cawu6xxts8mgryrl0684ar2mzddnukkog9ygtyqgkzep9iqcug0xiu30c5jmw1rwzxrrt2kurp8t88v5iap0hs0ubu00tta0f2h62qk2gttik3bs8b8oqdqmloxqwq4c6ol3phc5gdjjztfp6ial8xge5p251nx1lvz5el0g9bploadw2hmlk14f964b7r9juaimnm8dudffl4rvix0m1wy7vdiz8777vp74feyti8m9x1nlle2hflkc1m8cpv466n3wvbjg5mrlqpis0v8ajxscovccb65u388u27xgtc9fnxp7dfjk1rcwsf6u1yzl4jevhhvj4d6cfvan9oja0crd6xhyznawsjo8bmkwkf3m4otigxmjyud99tw5ofyseemie1kdyczz7xvd4cnpeglnrktpk183o9nb1gb5op58ieo70kb03y12pvsoxe9tuzkbw3omf3acrhqzvi50wvqdues46pp8ua1mbezg9jdhkh205ig622cavuxsmo932peiu3c5sbfaslscl9mfkle9wnl2masou57mfmijqd491391w70sl1gq34vcgzqagaitv6eonsy7zdw98y2i537pbjrffxjm0sfwmsztdcaxorpu04p3jnairkjojqwmdhdfjlqy0snb4ftwjwexd4rh1uc62ofvx213rm7pucov8xpaxlvuwzqydwobw5wy79ypnh39f0k36htm7yyit66sf2r1kjhghuxexpd99q0wc9it6ne22phq5y4l84bs31dn7yb14x1pyexetdm7aabck1qcl7wreagmwnjn5n8d0dkcv65acx65km97wy5apdsb0wzamm1nwq03lq27d3v7b6dny2x4vjrnm0yem9052uyx0gje3pk7n5ucnfe9aapdwd7dfjsei392mncs7t95u9dmf67bznn4qdmauba1tmkxab9zb13lsqkjh0vr96p35tn9648z9zeth8lkgacvm9uhcqo10niu579ny8o4n55yk46ndp6n2epvl7s8lvq9j2skw1a6hsazshkdup9zmui775sg54sjsg81tx02uvp6snc5te9c202xdxrswwt8da8aci2qmh2ci99fdepcmg3jg7249730iwp41fsp9m4rmgb6u74thpoanzhjui7rkvvu7enog9qjg2n76g57758pdzz6qowil0aiyzpfk6blxtlsgg4yt17b06xk3ztuvmr62g97wdl3zv9u3kbjkjbu4f0ifw83gbgpl6g8p6xal0cyv9huij497nfa0pobn7wz77afsic5r3fbf42625vkheoc7ygxotbgyowryqw0samyerfonzpmj9ksrgqk3utw90574hhdykvk63j7siirvghmnjzpne4jx9feypqhwdrycu59k4kak240okfl9t6m7wvwc49mdm57su0regj05x9moziv7bm06f8djsqu0arsu5q66qgr8dcccvorr3b5cmklx1ndpss10i1jaukr84wxxt0dmp7az9x6ryxzya7b874w3tx5fjrvr2hxbqm54hniljrnb6kbtuyb5fo17pwgojqf1978qw3atkf5ntond9f95leg9uhx0p6oz67h9z95kfoqxbx6zbffhj888tfbf0aguc9jb96gw6d5j8ih26to4yamru5sqrtp0q62cdmww6osb2pdon122se723xyjquvtsyd88i8l5q8x2afskl0zp32co60es0oz6k8k3dc5ibxlngp3om05zgbq0slwayyo9e0oc5tqvqkrvrrsduhyx3s9t4dofasl8qp39wgot84uogq60w9qu1mv0yt2v14gcjh8avn88jw9fi6hq4rwsk48xzq7nhc7qmbc07xs0cgh7zotyj752a7p5sjdtqs4pln9qd615qa22ax09f5n65xeih3mzaaq2gpes8w7n32k0092w5leh1f1zgko1klyuqlelsfe2revefhch3vpk5c2l9jk1ojl9ypxcjulngvn3v3fnytoxjyj8vrdtrmg6rheuxabqokt9f4abh061cf0tlw5wq6k61s0kleokfmao3ikpp53c362npif0qn4rwqnryikdxtec3rmhdbz67k65fzyve84tejin41vkly0qd275rcjr08nel30r1twxsr8hdgoklwuxwb6opizunas4ua4pw2v9cr1oqj4v273aua1l0ajpn1ydle1rbv8roj8qdb3qt9951pmm46dhklgq0g6u00cjmk8j1pngk99e1wuqjegoj1kl1i9rh0dmqgato6r0ku9sez1573mlwm38rvk381iah5wtg09hucr4kuot3bwdofkk4gjdkgastg64rg45fxdxba1pkxz1j2jnwesc8xujdso1qaqw3oczuucnjmw5yv70md2sqab26g2qo90h3r1konalbzbuwahz84kpu0zkbl4x80bsgkq4l8asscu7f5d4e7ogc3y8jpz69ctcq5kd20b2zd3kxj793o24mxqohgzpkj36sznphn9fqqix9cziq9oawoxpyt1gniae7udkwpwt8vkta3ueljoktl1vvmydm728uvzi0km86pk0a9hq0rq3k0ty8upp0jgq02eyrtr40ytxpre4usnstzf8ojmxf7730g4zk8w5562xz9sqbsnu1zf5yzqmka6fmfu3xzia86fsl2madf1hz835yclxpeh5lobj5eod3bllgzx3fw9uf5zct0ugaa1lck88i1umd77o4bt5v1j6bygbgu9775z1g787cbkqo7vi23ayhnff8bawns14iyydosxrw72jolire9yrzu07jwpxlwwf1m7nl2xsnf4ux1bse2uroxnice01da1fb23y9iza2teny5js8qetmduhxsgsxeafjtlz2pkxa9j2qxl5bbdnigf38cmywffw1rpnhn943rs2q0a4m8gqebxszvxdqhx43ew70b6cdklr3msanhkeyo247b2zjypzyqys5he6evcd86zqww6u7n9i02gffo6jgor48bfscyjjlqxzzzpy0uzjoejk4wyimorq5earkzv46wfywqldb8pdk19ln1ndtmdbyitl9a6487nn1y6pyg7jhj9p52xssn73dwjz2evawz9i30om0iufjt85zh6fbks2m1u4w1xzzvg00tznhg20kk2s3mk5yqqtjxri7uktxdvj6xjdeh73de6untggagx3lzor25wdtwcg9cfzyzfp3io6vkezzf8wranbwesgn2vzlx8u21s2da5un55zfyaefkdw8o9wo12d9gr8v3rhalznz9x4ifcn8ep311p5wv1zhveygc4fqebh23l8b4nddb92p0xk5fazietfnxz37ly7yah5qtozrbzr3s6abkmsemivbsfu40ctz76khn2lr4rmyl6j5b9fcjk6s7z373vwsd6f0k4rqy4pwcntdcgv7fb12b8294lwa95awv6t68hl7xp2anwa1lot0wkca8f8xlj87id6x4ot2sjxvy7m29ruugb
z6rdhbdpi5xbk88cf8dplomel47sml7letmzulb0ydemn2qnthlggqw29xbpngcvrnx14xgjusg4krd4prgkylrmw951i2j940wl3le4oubydbobmrvejhf86cwzzg7dtt71eykr8i281chyqr64vi270hkol7wd9zuqb9jmiarq3a2l66sy5jl2xrrxaq7hbgrfc8dubfpvst0cq49h85b3bkz633g3bonwmbkf16aetl53binyupov2yfnzxojse254kdt9jff1r0o0tmekto525eqzuhty80f7w6m39ly3yltja4gorjnewspolukft65fv495v8q5j58a6rqudycrdkxr522epy9qgl6k95aatcmoj1dpt2e9il1v42ndfaccd2c37m1snivkt9m9isnb3369xkyim3qlrcv8xj23eo27r4kp1gs8zrkx1i3cpxnpgjwqnbze8kefn26ajn9z6wllmtmakocdpjw62b0cawid4v7ranpvfru5edtrjms31bu28gib3ki9px88qarh0vxaqqsj2 00:06:25.661 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:25.661 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:25.661 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:25.661 09:20:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:25.661 [2024-10-16 09:20:49.872775] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:25.661 [2024-10-16 09:20:49.872877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60106 ] 00:06:25.661 { 00:06:25.661 "subsystems": [ 00:06:25.661 { 00:06:25.661 "subsystem": "bdev", 00:06:25.661 "config": [ 00:06:25.661 { 00:06:25.661 "params": { 00:06:25.661 "trtype": "pcie", 00:06:25.661 "traddr": "0000:00:10.0", 00:06:25.661 "name": "Nvme0" 00:06:25.661 }, 00:06:25.661 "method": "bdev_nvme_attach_controller" 00:06:25.661 }, 00:06:25.661 { 00:06:25.661 "method": "bdev_wait_for_examine" 00:06:25.661 } 00:06:25.661 ] 00:06:25.661 } 00:06:25.661 ] 00:06:25.661 } 00:06:25.661 [2024-10-16 09:20:50.014658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.920 [2024-10-16 09:20:50.085686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.920 [2024-10-16 09:20:50.152206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.920  [2024-10-16T09:20:50.583Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:26.179 00:06:26.179 09:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:26.179 09:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:26.179 09:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:26.179 09:20:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:26.179 [2024-10-16 09:20:50.530562] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:26.179 [2024-10-16 09:20:50.531730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:06:26.179 { 00:06:26.179 "subsystems": [ 00:06:26.179 { 00:06:26.179 "subsystem": "bdev", 00:06:26.179 "config": [ 00:06:26.179 { 00:06:26.179 "params": { 00:06:26.179 "trtype": "pcie", 00:06:26.179 "traddr": "0000:00:10.0", 00:06:26.179 "name": "Nvme0" 00:06:26.179 }, 00:06:26.179 "method": "bdev_nvme_attach_controller" 00:06:26.179 }, 00:06:26.179 { 00:06:26.179 "method": "bdev_wait_for_examine" 00:06:26.179 } 00:06:26.179 ] 00:06:26.179 } 00:06:26.179 ] 00:06:26.179 } 00:06:26.438 [2024-10-16 09:20:50.673097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.438 [2024-10-16 09:20:50.712389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.438 [2024-10-16 09:20:50.768004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.697  [2024-10-16T09:20:51.101Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:26.697 00:06:26.697 09:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:26.698 09:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 34um3h8x651suhlgm84e142c5uxs2cawu6xxts8mgryrl0684ar2mzddnukkog9ygtyqgkzep9iqcug0xiu30c5jmw1rwzxrrt2kurp8t88v5iap0hs0ubu00tta0f2h62qk2gttik3bs8b8oqdqmloxqwq4c6ol3phc5gdjjztfp6ial8xge5p251nx1lvz5el0g9bploadw2hmlk14f964b7r9juaimnm8dudffl4rvix0m1wy7vdiz8777vp74feyti8m9x1nlle2hflkc1m8cpv466n3wvbjg5mrlqpis0v8ajxscovccb65u388u27xgtc9fnxp7dfjk1rcwsf6u1yzl4jevhhvj4d6cfvan9oja0crd6xhyznawsjo8bmkwkf3m4otigxmjyud99tw5ofyseemie1kdyczz7xvd4cnpeglnrktpk183o9nb1gb5op58ieo70kb03y12pvsoxe9tuzkbw3omf3acrhqzvi50wvqdues46pp8ua1mbezg9jdhkh205ig622cavuxsmo932peiu3c5sbfaslscl9mfkle9wnl2masou57mfmijqd491391w70sl1gq34vcgzqagaitv6eonsy7zdw98y2i537pbjrffxjm0sfwmsztdcaxorpu04p3jnairkjojqwmdhdfjlqy0snb4ftwjwexd4rh1uc62ofvx213rm7pucov8xpaxlvuwzqydwobw5wy79ypnh39f0k36htm7yyit66sf2r1kjhghuxexpd99q0wc9it6ne22phq5y4l84bs31dn7yb14x1pyexetdm7aabck1qcl7wreagmwnjn5n8d0dkcv65acx65km97wy5apdsb0wzamm1nwq03lq27d3v7b6dny2x4vjrnm0yem9052uyx0gje3pk7n5ucnfe9aapdwd7dfjsei392mncs7t95u9dmf67bznn4qdmauba1tmkxab9zb13lsqkjh0vr96p35tn9648z9zeth8lkgacvm9uhcqo10niu579ny8o4n55yk46ndp6n2epvl7s8lvq9j2skw1a6hsazshkdup9zmui775sg54sjsg81tx02uvp6snc5te9c202xdxrswwt8da8aci2qmh2ci99fdepcmg3jg7249730iwp41fsp9m4rmgb6u74thpoanzhjui7rkvvu7enog9qjg2n76g57758pdzz6qowil0aiyzpfk6blxtlsgg4yt17b06xk3ztuvmr62g97wdl3zv9u3kbjkjbu4f0ifw83gbgpl6g8p6xal0cyv9huij497nfa0pobn7wz77afsic5r3fbf42625vkheoc7ygxotbgyowryqw0samyerfonzpmj9ksrgqk3utw90574hhdykvk63j7siirvghmnjzpne4jx9feypqhwdrycu59k4kak240okfl9t6m7wvwc49mdm57su0regj05x9moziv7bm06f8djsqu0arsu5q66qgr8dcccvorr3b5cmklx1ndpss10i1jaukr84wxxt0dmp7az9x6ryxzya7b874w3tx5fjrvr2hxbqm54hniljrnb6kbtuyb5fo17pwgojqf1978qw3atkf5ntond9f95leg9uhx0p6oz67h9z95kfoqxbx6zbffhj888tfbf0aguc9jb96gw6d5j8ih26to4yamru5sqrtp0q62cdmww6osb2pdon122se723xyjquvtsyd88i8l5q8x2afskl0zp32co60es0oz6k8k3dc5ibxlngp3om05zgbq0slwayyo9e0oc5tqvqkrvrrsduhyx3s9t4dofasl8qp39wgot84uogq60w9qu1mv0yt2v14gcjh8avn88jw9fi6hq4rwsk48xzq7nhc7qmbc07xs0cgh7zotyj752a7p5sjdtqs4pln9qd615qa22ax09f5n65xeih3mzaaq2gpes8w7n32k0092w5leh1f1zgko1klyuqlelsfe2revefhch3vpk5c2l9jk1ojl9ypxcjulngvn3v3fnytoxjyj8vrdtrmg6rheuxabqokt9f4abh061cf0tlw5wq6k61s0kleokfmao3ikpp53c362npif0qn4rwqnryikdxtec3r
mhdbz67k65fzyve84tejin41vkly0qd275rcjr08nel30r1twxsr8hdgoklwuxwb6opizunas4ua4pw2v9cr1oqj4v273aua1l0ajpn1ydle1rbv8roj8qdb3qt9951pmm46dhklgq0g6u00cjmk8j1pngk99e1wuqjegoj1kl1i9rh0dmqgato6r0ku9sez1573mlwm38rvk381iah5wtg09hucr4kuot3bwdofkk4gjdkgastg64rg45fxdxba1pkxz1j2jnwesc8xujdso1qaqw3oczuucnjmw5yv70md2sqab26g2qo90h3r1konalbzbuwahz84kpu0zkbl4x80bsgkq4l8asscu7f5d4e7ogc3y8jpz69ctcq5kd20b2zd3kxj793o24mxqohgzpkj36sznphn9fqqix9cziq9oawoxpyt1gniae7udkwpwt8vkta3ueljoktl1vvmydm728uvzi0km86pk0a9hq0rq3k0ty8upp0jgq02eyrtr40ytxpre4usnstzf8ojmxf7730g4zk8w5562xz9sqbsnu1zf5yzqmka6fmfu3xzia86fsl2madf1hz835yclxpeh5lobj5eod3bllgzx3fw9uf5zct0ugaa1lck88i1umd77o4bt5v1j6bygbgu9775z1g787cbkqo7vi23ayhnff8bawns14iyydosxrw72jolire9yrzu07jwpxlwwf1m7nl2xsnf4ux1bse2uroxnice01da1fb23y9iza2teny5js8qetmduhxsgsxeafjtlz2pkxa9j2qxl5bbdnigf38cmywffw1rpnhn943rs2q0a4m8gqebxszvxdqhx43ew70b6cdklr3msanhkeyo247b2zjypzyqys5he6evcd86zqww6u7n9i02gffo6jgor48bfscyjjlqxzzzpy0uzjoejk4wyimorq5earkzv46wfywqldb8pdk19ln1ndtmdbyitl9a6487nn1y6pyg7jhj9p52xssn73dwjz2evawz9i30om0iufjt85zh6fbks2m1u4w1xzzvg00tznhg20kk2s3mk5yqqtjxri7uktxdvj6xjdeh73de6untggagx3lzor25wdtwcg9cfzyzfp3io6vkezzf8wranbwesgn2vzlx8u21s2da5un55zfyaefkdw8o9wo12d9gr8v3rhalznz9x4ifcn8ep311p5wv1zhveygc4fqebh23l8b4nddb92p0xk5fazietfnxz37ly7yah5qtozrbzr3s6abkmsemivbsfu40ctz76khn2lr4rmyl6j5b9fcjk6s7z373vwsd6f0k4rqy4pwcntdcgv7fb12b8294lwa95awv6t68hl7xp2anwa1lot0wkca8f8xlj87id6x4ot2sjxvy7m29ruugbz6rdhbdpi5xbk88cf8dplomel47sml7letmzulb0ydemn2qnthlggqw29xbpngcvrnx14xgjusg4krd4prgkylrmw951i2j940wl3le4oubydbobmrvejhf86cwzzg7dtt71eykr8i281chyqr64vi270hkol7wd9zuqb9jmiarq3a2l66sy5jl2xrrxaq7hbgrfc8dubfpvst0cq49h85b3bkz633g3bonwmbkf16aetl53binyupov2yfnzxojse254kdt9jff1r0o0tmekto525eqzuhty80f7w6m39ly3yltja4gorjnewspolukft65fv495v8q5j58a6rqudycrdkxr522epy9qgl6k95aatcmoj1dpt2e9il1v42ndfaccd2c37m1snivkt9m9isnb3369xkyim3qlrcv8xj23eo27r4kp1gs8zrkx1i3cpxnpgjwqnbze8kefn26ajn9z6wllmtmakocdpjw62b0cawid4v7ranpvfru5edtrjms31bu28gib3ki9px88qarh0vxaqqsj2 == 
\3\4\u\m\3\h\8\x\6\5\1\s\u\h\l\g\m\8\4\e\1\4\2\c\5\u\x\s\2\c\a\w\u\6\x\x\t\s\8\m\g\r\y\r\l\0\6\8\4\a\r\2\m\z\d\d\n\u\k\k\o\g\9\y\g\t\y\q\g\k\z\e\p\9\i\q\c\u\g\0\x\i\u\3\0\c\5\j\m\w\1\r\w\z\x\r\r\t\2\k\u\r\p\8\t\8\8\v\5\i\a\p\0\h\s\0\u\b\u\0\0\t\t\a\0\f\2\h\6\2\q\k\2\g\t\t\i\k\3\b\s\8\b\8\o\q\d\q\m\l\o\x\q\w\q\4\c\6\o\l\3\p\h\c\5\g\d\j\j\z\t\f\p\6\i\a\l\8\x\g\e\5\p\2\5\1\n\x\1\l\v\z\5\e\l\0\g\9\b\p\l\o\a\d\w\2\h\m\l\k\1\4\f\9\6\4\b\7\r\9\j\u\a\i\m\n\m\8\d\u\d\f\f\l\4\r\v\i\x\0\m\1\w\y\7\v\d\i\z\8\7\7\7\v\p\7\4\f\e\y\t\i\8\m\9\x\1\n\l\l\e\2\h\f\l\k\c\1\m\8\c\p\v\4\6\6\n\3\w\v\b\j\g\5\m\r\l\q\p\i\s\0\v\8\a\j\x\s\c\o\v\c\c\b\6\5\u\3\8\8\u\2\7\x\g\t\c\9\f\n\x\p\7\d\f\j\k\1\r\c\w\s\f\6\u\1\y\z\l\4\j\e\v\h\h\v\j\4\d\6\c\f\v\a\n\9\o\j\a\0\c\r\d\6\x\h\y\z\n\a\w\s\j\o\8\b\m\k\w\k\f\3\m\4\o\t\i\g\x\m\j\y\u\d\9\9\t\w\5\o\f\y\s\e\e\m\i\e\1\k\d\y\c\z\z\7\x\v\d\4\c\n\p\e\g\l\n\r\k\t\p\k\1\8\3\o\9\n\b\1\g\b\5\o\p\5\8\i\e\o\7\0\k\b\0\3\y\1\2\p\v\s\o\x\e\9\t\u\z\k\b\w\3\o\m\f\3\a\c\r\h\q\z\v\i\5\0\w\v\q\d\u\e\s\4\6\p\p\8\u\a\1\m\b\e\z\g\9\j\d\h\k\h\2\0\5\i\g\6\2\2\c\a\v\u\x\s\m\o\9\3\2\p\e\i\u\3\c\5\s\b\f\a\s\l\s\c\l\9\m\f\k\l\e\9\w\n\l\2\m\a\s\o\u\5\7\m\f\m\i\j\q\d\4\9\1\3\9\1\w\7\0\s\l\1\g\q\3\4\v\c\g\z\q\a\g\a\i\t\v\6\e\o\n\s\y\7\z\d\w\9\8\y\2\i\5\3\7\p\b\j\r\f\f\x\j\m\0\s\f\w\m\s\z\t\d\c\a\x\o\r\p\u\0\4\p\3\j\n\a\i\r\k\j\o\j\q\w\m\d\h\d\f\j\l\q\y\0\s\n\b\4\f\t\w\j\w\e\x\d\4\r\h\1\u\c\6\2\o\f\v\x\2\1\3\r\m\7\p\u\c\o\v\8\x\p\a\x\l\v\u\w\z\q\y\d\w\o\b\w\5\w\y\7\9\y\p\n\h\3\9\f\0\k\3\6\h\t\m\7\y\y\i\t\6\6\s\f\2\r\1\k\j\h\g\h\u\x\e\x\p\d\9\9\q\0\w\c\9\i\t\6\n\e\2\2\p\h\q\5\y\4\l\8\4\b\s\3\1\d\n\7\y\b\1\4\x\1\p\y\e\x\e\t\d\m\7\a\a\b\c\k\1\q\c\l\7\w\r\e\a\g\m\w\n\j\n\5\n\8\d\0\d\k\c\v\6\5\a\c\x\6\5\k\m\9\7\w\y\5\a\p\d\s\b\0\w\z\a\m\m\1\n\w\q\0\3\l\q\2\7\d\3\v\7\b\6\d\n\y\2\x\4\v\j\r\n\m\0\y\e\m\9\0\5\2\u\y\x\0\g\j\e\3\p\k\7\n\5\u\c\n\f\e\9\a\a\p\d\w\d\7\d\f\j\s\e\i\3\9\2\m\n\c\s\7\t\9\5\u\9\d\m\f\6\7\b\z\n\n\4\q\d\m\a\u\b\a\1\t\m\k\x\a\b\9\z\b\1\3\l\s\q\k\j\h\0\v\r\9\6\p\3\5\t\n\9\6\4\8\z\9\z\e\t\h\8\l\k\g\a\c\v\m\9\u\h\c\q\o\1\0\n\i\u\5\7\9\n\y\8\o\4\n\5\5\y\k\4\6\n\d\p\6\n\2\e\p\v\l\7\s\8\l\v\q\9\j\2\s\k\w\1\a\6\h\s\a\z\s\h\k\d\u\p\9\z\m\u\i\7\7\5\s\g\5\4\s\j\s\g\8\1\t\x\0\2\u\v\p\6\s\n\c\5\t\e\9\c\2\0\2\x\d\x\r\s\w\w\t\8\d\a\8\a\c\i\2\q\m\h\2\c\i\9\9\f\d\e\p\c\m\g\3\j\g\7\2\4\9\7\3\0\i\w\p\4\1\f\s\p\9\m\4\r\m\g\b\6\u\7\4\t\h\p\o\a\n\z\h\j\u\i\7\r\k\v\v\u\7\e\n\o\g\9\q\j\g\2\n\7\6\g\5\7\7\5\8\p\d\z\z\6\q\o\w\i\l\0\a\i\y\z\p\f\k\6\b\l\x\t\l\s\g\g\4\y\t\1\7\b\0\6\x\k\3\z\t\u\v\m\r\6\2\g\9\7\w\d\l\3\z\v\9\u\3\k\b\j\k\j\b\u\4\f\0\i\f\w\8\3\g\b\g\p\l\6\g\8\p\6\x\a\l\0\c\y\v\9\h\u\i\j\4\9\7\n\f\a\0\p\o\b\n\7\w\z\7\7\a\f\s\i\c\5\r\3\f\b\f\4\2\6\2\5\v\k\h\e\o\c\7\y\g\x\o\t\b\g\y\o\w\r\y\q\w\0\s\a\m\y\e\r\f\o\n\z\p\m\j\9\k\s\r\g\q\k\3\u\t\w\9\0\5\7\4\h\h\d\y\k\v\k\6\3\j\7\s\i\i\r\v\g\h\m\n\j\z\p\n\e\4\j\x\9\f\e\y\p\q\h\w\d\r\y\c\u\5\9\k\4\k\a\k\2\4\0\o\k\f\l\9\t\6\m\7\w\v\w\c\4\9\m\d\m\5\7\s\u\0\r\e\g\j\0\5\x\9\m\o\z\i\v\7\b\m\0\6\f\8\d\j\s\q\u\0\a\r\s\u\5\q\6\6\q\g\r\8\d\c\c\c\v\o\r\r\3\b\5\c\m\k\l\x\1\n\d\p\s\s\1\0\i\1\j\a\u\k\r\8\4\w\x\x\t\0\d\m\p\7\a\z\9\x\6\r\y\x\z\y\a\7\b\8\7\4\w\3\t\x\5\f\j\r\v\r\2\h\x\b\q\m\5\4\h\n\i\l\j\r\n\b\6\k\b\t\u\y\b\5\f\o\1\7\p\w\g\o\j\q\f\1\9\7\8\q\w\3\a\t\k\f\5\n\t\o\n\d\9\f\9\5\l\e\g\9\u\h\x\0\p\6\o\z\6\7\h\9\z\9\5\k\f\o\q\x\b\x\6\z\b\f\f\h\j\8\8\8\t\f\b\f\0\a\g\u\c\9\j\b\9\6\g\w\6\d\5\j\8\i\h\2\6\t\o\4\y\a\m\r\u\5\s\q\r\t\p\0\q\6\2\c\d\m\w\w\6\o\s\b\2\p\d\o\n\1\2\2\s\e\7\2\3\x\y\j\q\u\v\t\s\y\d\8\8\i\8\l\5\q\8\x\2\a\f\s\k\l\0\z\p\3\2\c\o\6\0\e\s\0\o\z\6\k\8\k\3\d\c\5\i\b\x\l\n\g\p\
3\o\m\0\5\z\g\b\q\0\s\l\w\a\y\y\o\9\e\0\o\c\5\t\q\v\q\k\r\v\r\r\s\d\u\h\y\x\3\s\9\t\4\d\o\f\a\s\l\8\q\p\3\9\w\g\o\t\8\4\u\o\g\q\6\0\w\9\q\u\1\m\v\0\y\t\2\v\1\4\g\c\j\h\8\a\v\n\8\8\j\w\9\f\i\6\h\q\4\r\w\s\k\4\8\x\z\q\7\n\h\c\7\q\m\b\c\0\7\x\s\0\c\g\h\7\z\o\t\y\j\7\5\2\a\7\p\5\s\j\d\t\q\s\4\p\l\n\9\q\d\6\1\5\q\a\2\2\a\x\0\9\f\5\n\6\5\x\e\i\h\3\m\z\a\a\q\2\g\p\e\s\8\w\7\n\3\2\k\0\0\9\2\w\5\l\e\h\1\f\1\z\g\k\o\1\k\l\y\u\q\l\e\l\s\f\e\2\r\e\v\e\f\h\c\h\3\v\p\k\5\c\2\l\9\j\k\1\o\j\l\9\y\p\x\c\j\u\l\n\g\v\n\3\v\3\f\n\y\t\o\x\j\y\j\8\v\r\d\t\r\m\g\6\r\h\e\u\x\a\b\q\o\k\t\9\f\4\a\b\h\0\6\1\c\f\0\t\l\w\5\w\q\6\k\6\1\s\0\k\l\e\o\k\f\m\a\o\3\i\k\p\p\5\3\c\3\6\2\n\p\i\f\0\q\n\4\r\w\q\n\r\y\i\k\d\x\t\e\c\3\r\m\h\d\b\z\6\7\k\6\5\f\z\y\v\e\8\4\t\e\j\i\n\4\1\v\k\l\y\0\q\d\2\7\5\r\c\j\r\0\8\n\e\l\3\0\r\1\t\w\x\s\r\8\h\d\g\o\k\l\w\u\x\w\b\6\o\p\i\z\u\n\a\s\4\u\a\4\p\w\2\v\9\c\r\1\o\q\j\4\v\2\7\3\a\u\a\1\l\0\a\j\p\n\1\y\d\l\e\1\r\b\v\8\r\o\j\8\q\d\b\3\q\t\9\9\5\1\p\m\m\4\6\d\h\k\l\g\q\0\g\6\u\0\0\c\j\m\k\8\j\1\p\n\g\k\9\9\e\1\w\u\q\j\e\g\o\j\1\k\l\1\i\9\r\h\0\d\m\q\g\a\t\o\6\r\0\k\u\9\s\e\z\1\5\7\3\m\l\w\m\3\8\r\v\k\3\8\1\i\a\h\5\w\t\g\0\9\h\u\c\r\4\k\u\o\t\3\b\w\d\o\f\k\k\4\g\j\d\k\g\a\s\t\g\6\4\r\g\4\5\f\x\d\x\b\a\1\p\k\x\z\1\j\2\j\n\w\e\s\c\8\x\u\j\d\s\o\1\q\a\q\w\3\o\c\z\u\u\c\n\j\m\w\5\y\v\7\0\m\d\2\s\q\a\b\2\6\g\2\q\o\9\0\h\3\r\1\k\o\n\a\l\b\z\b\u\w\a\h\z\8\4\k\p\u\0\z\k\b\l\4\x\8\0\b\s\g\k\q\4\l\8\a\s\s\c\u\7\f\5\d\4\e\7\o\g\c\3\y\8\j\p\z\6\9\c\t\c\q\5\k\d\2\0\b\2\z\d\3\k\x\j\7\9\3\o\2\4\m\x\q\o\h\g\z\p\k\j\3\6\s\z\n\p\h\n\9\f\q\q\i\x\9\c\z\i\q\9\o\a\w\o\x\p\y\t\1\g\n\i\a\e\7\u\d\k\w\p\w\t\8\v\k\t\a\3\u\e\l\j\o\k\t\l\1\v\v\m\y\d\m\7\2\8\u\v\z\i\0\k\m\8\6\p\k\0\a\9\h\q\0\r\q\3\k\0\t\y\8\u\p\p\0\j\g\q\0\2\e\y\r\t\r\4\0\y\t\x\p\r\e\4\u\s\n\s\t\z\f\8\o\j\m\x\f\7\7\3\0\g\4\z\k\8\w\5\5\6\2\x\z\9\s\q\b\s\n\u\1\z\f\5\y\z\q\m\k\a\6\f\m\f\u\3\x\z\i\a\8\6\f\s\l\2\m\a\d\f\1\h\z\8\3\5\y\c\l\x\p\e\h\5\l\o\b\j\5\e\o\d\3\b\l\l\g\z\x\3\f\w\9\u\f\5\z\c\t\0\u\g\a\a\1\l\c\k\8\8\i\1\u\m\d\7\7\o\4\b\t\5\v\1\j\6\b\y\g\b\g\u\9\7\7\5\z\1\g\7\8\7\c\b\k\q\o\7\v\i\2\3\a\y\h\n\f\f\8\b\a\w\n\s\1\4\i\y\y\d\o\s\x\r\w\7\2\j\o\l\i\r\e\9\y\r\z\u\0\7\j\w\p\x\l\w\w\f\1\m\7\n\l\2\x\s\n\f\4\u\x\1\b\s\e\2\u\r\o\x\n\i\c\e\0\1\d\a\1\f\b\2\3\y\9\i\z\a\2\t\e\n\y\5\j\s\8\q\e\t\m\d\u\h\x\s\g\s\x\e\a\f\j\t\l\z\2\p\k\x\a\9\j\2\q\x\l\5\b\b\d\n\i\g\f\3\8\c\m\y\w\f\f\w\1\r\p\n\h\n\9\4\3\r\s\2\q\0\a\4\m\8\g\q\e\b\x\s\z\v\x\d\q\h\x\4\3\e\w\7\0\b\6\c\d\k\l\r\3\m\s\a\n\h\k\e\y\o\2\4\7\b\2\z\j\y\p\z\y\q\y\s\5\h\e\6\e\v\c\d\8\6\z\q\w\w\6\u\7\n\9\i\0\2\g\f\f\o\6\j\g\o\r\4\8\b\f\s\c\y\j\j\l\q\x\z\z\z\p\y\0\u\z\j\o\e\j\k\4\w\y\i\m\o\r\q\5\e\a\r\k\z\v\4\6\w\f\y\w\q\l\d\b\8\p\d\k\1\9\l\n\1\n\d\t\m\d\b\y\i\t\l\9\a\6\4\8\7\n\n\1\y\6\p\y\g\7\j\h\j\9\p\5\2\x\s\s\n\7\3\d\w\j\z\2\e\v\a\w\z\9\i\3\0\o\m\0\i\u\f\j\t\8\5\z\h\6\f\b\k\s\2\m\1\u\4\w\1\x\z\z\v\g\0\0\t\z\n\h\g\2\0\k\k\2\s\3\m\k\5\y\q\q\t\j\x\r\i\7\u\k\t\x\d\v\j\6\x\j\d\e\h\7\3\d\e\6\u\n\t\g\g\a\g\x\3\l\z\o\r\2\5\w\d\t\w\c\g\9\c\f\z\y\z\f\p\3\i\o\6\v\k\e\z\z\f\8\w\r\a\n\b\w\e\s\g\n\2\v\z\l\x\8\u\2\1\s\2\d\a\5\u\n\5\5\z\f\y\a\e\f\k\d\w\8\o\9\w\o\1\2\d\9\g\r\8\v\3\r\h\a\l\z\n\z\9\x\4\i\f\c\n\8\e\p\3\1\1\p\5\w\v\1\z\h\v\e\y\g\c\4\f\q\e\b\h\2\3\l\8\b\4\n\d\d\b\9\2\p\0\x\k\5\f\a\z\i\e\t\f\n\x\z\3\7\l\y\7\y\a\h\5\q\t\o\z\r\b\z\r\3\s\6\a\b\k\m\s\e\m\i\v\b\s\f\u\4\0\c\t\z\7\6\k\h\n\2\l\r\4\r\m\y\l\6\j\5\b\9\f\c\j\k\6\s\7\z\3\7\3\v\w\s\d\6\f\0\k\4\r\q\y\4\p\w\c\n\t\d\c\g\v\7\f\b\1\2\b\8\2\9\4\l\w\a\9\5\a\w\v\6\t\6\8\h\l\7\x\p\2\a\n\w\a\1\l\o\t\0\w\k\c\a\8\f\8\x\l\j\8\7\i\d\6\x\4\o\t\2\s\j\x\v\y\7\m\2\9\r\u\u\g\b\z\6\r\d\h
\b\d\p\i\5\x\b\k\8\8\c\f\8\d\p\l\o\m\e\l\4\7\s\m\l\7\l\e\t\m\z\u\l\b\0\y\d\e\m\n\2\q\n\t\h\l\g\g\q\w\2\9\x\b\p\n\g\c\v\r\n\x\1\4\x\g\j\u\s\g\4\k\r\d\4\p\r\g\k\y\l\r\m\w\9\5\1\i\2\j\9\4\0\w\l\3\l\e\4\o\u\b\y\d\b\o\b\m\r\v\e\j\h\f\8\6\c\w\z\z\g\7\d\t\t\7\1\e\y\k\r\8\i\2\8\1\c\h\y\q\r\6\4\v\i\2\7\0\h\k\o\l\7\w\d\9\z\u\q\b\9\j\m\i\a\r\q\3\a\2\l\6\6\s\y\5\j\l\2\x\r\r\x\a\q\7\h\b\g\r\f\c\8\d\u\b\f\p\v\s\t\0\c\q\4\9\h\8\5\b\3\b\k\z\6\3\3\g\3\b\o\n\w\m\b\k\f\1\6\a\e\t\l\5\3\b\i\n\y\u\p\o\v\2\y\f\n\z\x\o\j\s\e\2\5\4\k\d\t\9\j\f\f\1\r\0\o\0\t\m\e\k\t\o\5\2\5\e\q\z\u\h\t\y\8\0\f\7\w\6\m\3\9\l\y\3\y\l\t\j\a\4\g\o\r\j\n\e\w\s\p\o\l\u\k\f\t\6\5\f\v\4\9\5\v\8\q\5\j\5\8\a\6\r\q\u\d\y\c\r\d\k\x\r\5\2\2\e\p\y\9\q\g\l\6\k\9\5\a\a\t\c\m\o\j\1\d\p\t\2\e\9\i\l\1\v\4\2\n\d\f\a\c\c\d\2\c\3\7\m\1\s\n\i\v\k\t\9\m\9\i\s\n\b\3\3\6\9\x\k\y\i\m\3\q\l\r\c\v\8\x\j\2\3\e\o\2\7\r\4\k\p\1\g\s\8\z\r\k\x\1\i\3\c\p\x\n\p\g\j\w\q\n\b\z\e\8\k\e\f\n\2\6\a\j\n\9\z\6\w\l\l\m\t\m\a\k\o\c\d\p\j\w\6\2\b\0\c\a\w\i\d\4\v\7\r\a\n\p\v\f\r\u\5\e\d\t\r\j\m\s\3\1\b\u\2\8\g\i\b\3\k\i\9\p\x\8\8\q\a\r\h\0\v\x\a\q\q\s\j\2 ]] 00:06:26.698 ************************************ 00:06:26.698 00:06:26.698 real 0m1.304s 00:06:26.698 user 0m0.873s 00:06:26.698 sys 0m0.647s 00:06:26.698 09:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.698 09:20:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:26.698 END TEST dd_rw_offset 00:06:26.698 ************************************ 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.958 09:20:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.958 [2024-10-16 09:20:51.165356] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:26.958 [2024-10-16 09:20:51.165455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60149 ] 00:06:26.958 { 00:06:26.958 "subsystems": [ 00:06:26.958 { 00:06:26.958 "subsystem": "bdev", 00:06:26.958 "config": [ 00:06:26.958 { 00:06:26.958 "params": { 00:06:26.958 "trtype": "pcie", 00:06:26.958 "traddr": "0000:00:10.0", 00:06:26.958 "name": "Nvme0" 00:06:26.958 }, 00:06:26.958 "method": "bdev_nvme_attach_controller" 00:06:26.958 }, 00:06:26.958 { 00:06:26.958 "method": "bdev_wait_for_examine" 00:06:26.958 } 00:06:26.958 ] 00:06:26.958 } 00:06:26.958 ] 00:06:26.958 } 00:06:26.958 [2024-10-16 09:20:51.301657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.958 [2024-10-16 09:20:51.341482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.217 [2024-10-16 09:20:51.396462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.217  [2024-10-16T09:20:51.880Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:27.476 00:06:27.476 09:20:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.476 00:06:27.476 real 0m17.211s 00:06:27.476 user 0m12.142s 00:06:27.476 sys 0m6.909s 00:06:27.476 09:20:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.476 09:20:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.476 ************************************ 00:06:27.476 END TEST spdk_dd_basic_rw 00:06:27.476 ************************************ 00:06:27.476 09:20:51 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:27.476 09:20:51 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.476 09:20:51 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.476 09:20:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:27.476 ************************************ 00:06:27.476 START TEST spdk_dd_posix 00:06:27.476 ************************************ 00:06:27.476 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:27.476 * Looking for test storage... 
00:06:27.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:27.476 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.476 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.476 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.736 --rc genhtml_branch_coverage=1 00:06:27.736 --rc genhtml_function_coverage=1 00:06:27.736 --rc genhtml_legend=1 00:06:27.736 --rc geninfo_all_blocks=1 00:06:27.736 --rc geninfo_unexecuted_blocks=1 00:06:27.736 00:06:27.736 ' 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.736 --rc genhtml_branch_coverage=1 00:06:27.736 --rc genhtml_function_coverage=1 00:06:27.736 --rc genhtml_legend=1 00:06:27.736 --rc geninfo_all_blocks=1 00:06:27.736 --rc geninfo_unexecuted_blocks=1 00:06:27.736 00:06:27.736 ' 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.736 --rc genhtml_branch_coverage=1 00:06:27.736 --rc genhtml_function_coverage=1 00:06:27.736 --rc genhtml_legend=1 00:06:27.736 --rc geninfo_all_blocks=1 00:06:27.736 --rc geninfo_unexecuted_blocks=1 00:06:27.736 00:06:27.736 ' 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.736 --rc genhtml_branch_coverage=1 00:06:27.736 --rc genhtml_function_coverage=1 00:06:27.736 --rc genhtml_legend=1 00:06:27.736 --rc geninfo_all_blocks=1 00:06:27.736 --rc geninfo_unexecuted_blocks=1 00:06:27.736 00:06:27.736 ' 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.736 09:20:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:27.737 * First test run, liburing in use 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.737 ************************************ 00:06:27.737 START TEST dd_flag_append 00:06:27.737 ************************************ 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=9opjzrqjhnj9mhkd7zdoaey2vgebj6ag 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=8b05sfqzkuuwr8wzh3s49kr8wtlsujnj 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 9opjzrqjhnj9mhkd7zdoaey2vgebj6ag 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 8b05sfqzkuuwr8wzh3s49kr8wtlsujnj 00:06:27.737 09:20:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:27.737 [2024-10-16 09:20:52.043289] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:27.737 [2024-10-16 09:20:52.043465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:06:27.996 [2024-10-16 09:20:52.193186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.996 [2024-10-16 09:20:52.262633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.996 [2024-10-16 09:20:52.321348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.996  [2024-10-16T09:20:52.659Z] Copying: 32/32 [B] (average 31 kBps) 00:06:28.255 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 8b05sfqzkuuwr8wzh3s49kr8wtlsujnj9opjzrqjhnj9mhkd7zdoaey2vgebj6ag == \8\b\0\5\s\f\q\z\k\u\u\w\r\8\w\z\h\3\s\4\9\k\r\8\w\t\l\s\u\j\n\j\9\o\p\j\z\r\q\j\h\n\j\9\m\h\k\d\7\z\d\o\a\e\y\2\v\g\e\b\j\6\a\g ]] 00:06:28.255 00:06:28.255 real 0m0.603s 00:06:28.255 user 0m0.323s 00:06:28.255 sys 0m0.307s 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.255 ************************************ 00:06:28.255 END TEST dd_flag_append 00:06:28.255 ************************************ 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:28.255 ************************************ 00:06:28.255 START TEST dd_flag_directory 00:06:28.255 ************************************ 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.255 09:20:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:28.515 [2024-10-16 09:20:52.672940] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:28.515 [2024-10-16 09:20:52.673052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:06:28.515 [2024-10-16 09:20:52.811808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.515 [2024-10-16 09:20:52.880457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.774 [2024-10-16 09:20:52.939364] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.774 [2024-10-16 09:20:52.978275] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:28.774 [2024-10-16 09:20:52.978344] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:28.774 [2024-10-16 09:20:52.978360] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.774 [2024-10-16 09:20:53.100471] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.774 09:20:53 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:28.774 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:29.033 [2024-10-16 09:20:53.228484] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:29.033 [2024-10-16 09:20:53.228613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60261 ] 00:06:29.033 [2024-10-16 09:20:53.365258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.033 [2024-10-16 09:20:53.420114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.292 [2024-10-16 09:20:53.475617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.292 [2024-10-16 09:20:53.513153] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:29.292 [2024-10-16 09:20:53.513211] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:29.292 [2024-10-16 09:20:53.513226] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.292 [2024-10-16 09:20:53.631204] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:29.292 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:29.292 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.551 00:06:29.551 real 0m1.086s 00:06:29.551 user 0m0.578s 00:06:29.551 sys 0m0.297s 00:06:29.551 ************************************ 00:06:29.551 END TEST dd_flag_directory 00:06:29.551 ************************************ 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:29.551 09:20:53 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:29.551 ************************************ 00:06:29.551 START TEST dd_flag_nofollow 00:06:29.551 ************************************ 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.551 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.552 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.552 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.552 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.552 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.552 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.552 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:29.552 09:20:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.552 [2024-10-16 09:20:53.819242] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:29.552 [2024-10-16 09:20:53.819378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60295 ] 00:06:29.811 [2024-10-16 09:20:53.960746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.811 [2024-10-16 09:20:54.031338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.811 [2024-10-16 09:20:54.089492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.811 [2024-10-16 09:20:54.126809] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:29.811 [2024-10-16 09:20:54.126900] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:29.811 [2024-10-16 09:20:54.126918] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.070 [2024-10-16 09:20:54.247727] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.070 09:20:54 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.070 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:30.070 [2024-10-16 09:20:54.376339] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:30.070 [2024-10-16 09:20:54.376689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60306 ] 00:06:30.329 [2024-10-16 09:20:54.516596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.329 [2024-10-16 09:20:54.583049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.329 [2024-10-16 09:20:54.640046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.329 [2024-10-16 09:20:54.676119] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:30.329 [2024-10-16 09:20:54.676213] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:30.329 [2024-10-16 09:20:54.676243] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.588 [2024-10-16 09:20:54.793122] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:30.588 09:20:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.588 [2024-10-16 09:20:54.914144] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:30.588 [2024-10-16 09:20:54.914452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60313 ] 00:06:30.847 [2024-10-16 09:20:55.053021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.847 [2024-10-16 09:20:55.112499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.847 [2024-10-16 09:20:55.166837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.847  [2024-10-16T09:20:55.510Z] Copying: 512/512 [B] (average 500 kBps) 00:06:31.106 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 140740qekxbfbaio8dnqfzhcs38l3ehi3cxh23djfd13edq4ab9gehknqrw6ajuxukabh7zfrrsgxuc4dik827wf8ntc5aysm5vnps8fy5xjd5cp83jc75a5s7s2edt1vnfzaxsplxxp0rco1mw9xhuh48j6p0wxbf6723obbel3zk0btissdt0qkc5gge8i2s3uqbuyuglvmsn3tjjdgso9opjbx6aac9vz8w642z4obc1x499ngytrz7xapndgrtklvef81qx3y3tfx6o5yz4y1g9wa5fffz70qwi87bzausa3jbewdes808kqhqqllgh6m29i948ucpd3iu02lpm4n76s7rql798zc0uyjjf7dtxvzjpdebsuyifhk92juspp883c5zxq3grc34z78yq42c2ob36xiuqpnjbf52ks0yom2ndqubz3oykosna7s0armxfgl5bu5sjj3ib1uj246tiyx3tfbjhmupzav06ider9d6v796vtilduls94 == \1\4\0\7\4\0\q\e\k\x\b\f\b\a\i\o\8\d\n\q\f\z\h\c\s\3\8\l\3\e\h\i\3\c\x\h\2\3\d\j\f\d\1\3\e\d\q\4\a\b\9\g\e\h\k\n\q\r\w\6\a\j\u\x\u\k\a\b\h\7\z\f\r\r\s\g\x\u\c\4\d\i\k\8\2\7\w\f\8\n\t\c\5\a\y\s\m\5\v\n\p\s\8\f\y\5\x\j\d\5\c\p\8\3\j\c\7\5\a\5\s\7\s\2\e\d\t\1\v\n\f\z\a\x\s\p\l\x\x\p\0\r\c\o\1\m\w\9\x\h\u\h\4\8\j\6\p\0\w\x\b\f\6\7\2\3\o\b\b\e\l\3\z\k\0\b\t\i\s\s\d\t\0\q\k\c\5\g\g\e\8\i\2\s\3\u\q\b\u\y\u\g\l\v\m\s\n\3\t\j\j\d\g\s\o\9\o\p\j\b\x\6\a\a\c\9\v\z\8\w\6\4\2\z\4\o\b\c\1\x\4\9\9\n\g\y\t\r\z\7\x\a\p\n\d\g\r\t\k\l\v\e\f\8\1\q\x\3\y\3\t\f\x\6\o\5\y\z\4\y\1\g\9\w\a\5\f\f\f\z\7\0\q\w\i\8\7\b\z\a\u\s\a\3\j\b\e\w\d\e\s\8\0\8\k\q\h\q\q\l\l\g\h\6\m\2\9\i\9\4\8\u\c\p\d\3\i\u\0\2\l\p\m\4\n\7\6\s\7\r\q\l\7\9\8\z\c\0\u\y\j\j\f\7\d\t\x\v\z\j\p\d\e\b\s\u\y\i\f\h\k\9\2\j\u\s\p\p\8\8\3\c\5\z\x\q\3\g\r\c\3\4\z\7\8\y\q\4\2\c\2\o\b\3\6\x\i\u\q\p\n\j\b\f\5\2\k\s\0\y\o\m\2\n\d\q\u\b\z\3\o\y\k\o\s\n\a\7\s\0\a\r\m\x\f\g\l\5\b\u\5\s\j\j\3\i\b\1\u\j\2\4\6\t\i\y\x\3\t\f\b\j\h\m\u\p\z\a\v\0\6\i\d\e\r\9\d\6\v\7\9\6\v\t\i\l\d\u\l\s\9\4 ]] 00:06:31.106 00:06:31.106 real 0m1.635s 00:06:31.106 user 0m0.893s 00:06:31.106 sys 0m0.561s 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:31.106 ************************************ 00:06:31.106 END TEST dd_flag_nofollow 00:06:31.106 ************************************ 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:31.106 ************************************ 00:06:31.106 START TEST dd_flag_noatime 00:06:31.106 ************************************ 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1729070455 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1729070455 00:06:31.106 09:20:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:32.482 09:20:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.482 [2024-10-16 09:20:56.516116] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:32.482 [2024-10-16 09:20:56.516216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60361 ] 00:06:32.482 [2024-10-16 09:20:56.656870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.482 [2024-10-16 09:20:56.716559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.482 [2024-10-16 09:20:56.773435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.482  [2024-10-16T09:20:57.185Z] Copying: 512/512 [B] (average 500 kBps) 00:06:32.781 00:06:32.781 09:20:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.781 09:20:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1729070455 )) 00:06:32.781 09:20:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.781 09:20:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1729070455 )) 00:06:32.781 09:20:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.781 [2024-10-16 09:20:57.056670] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:32.781 [2024-10-16 09:20:57.056763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60369 ] 00:06:33.040 [2024-10-16 09:20:57.194341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.040 [2024-10-16 09:20:57.237931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.040 [2024-10-16 09:20:57.294268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.040  [2024-10-16T09:20:57.703Z] Copying: 512/512 [B] (average 500 kBps) 00:06:33.299 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1729070457 )) 00:06:33.299 00:06:33.299 real 0m2.079s 00:06:33.299 user 0m0.581s 00:06:33.299 sys 0m0.536s 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:33.299 ************************************ 00:06:33.299 END TEST dd_flag_noatime 00:06:33.299 ************************************ 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:33.299 ************************************ 00:06:33.299 START TEST dd_flags_misc 00:06:33.299 ************************************ 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.299 09:20:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:33.299 [2024-10-16 09:20:57.620917] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:33.299 [2024-10-16 09:20:57.621172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60403 ] 00:06:33.558 [2024-10-16 09:20:57.753883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.558 [2024-10-16 09:20:57.799714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.558 [2024-10-16 09:20:57.853789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.558  [2024-10-16T09:20:58.221Z] Copying: 512/512 [B] (average 500 kBps) 00:06:33.817 00:06:33.818 09:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xxrsnp4unvene1yszrzoprkaq4jxkr4g7fblzmt665rf8ae7zu72ykkniajvt2xys9kw8kg10t2n4x2n8k96nxt2o7c1n4pt776tftqki64uqzhlgryd6ez8v79g3g34th103yvlexhjdrajqkiowgxubc1c1byh71nnj50y2proebdp5ciyp7oqy67hqq6gbqmjcruda3tc2izxf16ortyjyn3g2hr18cne73o48lhxjzgdi6re58wd8tygsd7ftrxlo0vjtexahkjx6hnuo5qdxk6o3oc89wgrdd7qfsnf4o1a162zsj1nojjspiy4hn9axy84rfburbuz28m5snbl2as3g5qxp48r5agv8r15azkaioo2ocqxmmqug6aurkm8c1qd0k7o5yyg4t5umb5u8rn2f471s9d3zqmfoex0lb5mq688nq0mrojysnuofgvpsn5xlwnzt2i5lrtww5o9vb003cu0sr3r502vczbmte2veh7gohdsgqm5d1b == \4\x\x\r\s\n\p\4\u\n\v\e\n\e\1\y\s\z\r\z\o\p\r\k\a\q\4\j\x\k\r\4\g\7\f\b\l\z\m\t\6\6\5\r\f\8\a\e\7\z\u\7\2\y\k\k\n\i\a\j\v\t\2\x\y\s\9\k\w\8\k\g\1\0\t\2\n\4\x\2\n\8\k\9\6\n\x\t\2\o\7\c\1\n\4\p\t\7\7\6\t\f\t\q\k\i\6\4\u\q\z\h\l\g\r\y\d\6\e\z\8\v\7\9\g\3\g\3\4\t\h\1\0\3\y\v\l\e\x\h\j\d\r\a\j\q\k\i\o\w\g\x\u\b\c\1\c\1\b\y\h\7\1\n\n\j\5\0\y\2\p\r\o\e\b\d\p\5\c\i\y\p\7\o\q\y\6\7\h\q\q\6\g\b\q\m\j\c\r\u\d\a\3\t\c\2\i\z\x\f\1\6\o\r\t\y\j\y\n\3\g\2\h\r\1\8\c\n\e\7\3\o\4\8\l\h\x\j\z\g\d\i\6\r\e\5\8\w\d\8\t\y\g\s\d\7\f\t\r\x\l\o\0\v\j\t\e\x\a\h\k\j\x\6\h\n\u\o\5\q\d\x\k\6\o\3\o\c\8\9\w\g\r\d\d\7\q\f\s\n\f\4\o\1\a\1\6\2\z\s\j\1\n\o\j\j\s\p\i\y\4\h\n\9\a\x\y\8\4\r\f\b\u\r\b\u\z\2\8\m\5\s\n\b\l\2\a\s\3\g\5\q\x\p\4\8\r\5\a\g\v\8\r\1\5\a\z\k\a\i\o\o\2\o\c\q\x\m\m\q\u\g\6\a\u\r\k\m\8\c\1\q\d\0\k\7\o\5\y\y\g\4\t\5\u\m\b\5\u\8\r\n\2\f\4\7\1\s\9\d\3\z\q\m\f\o\e\x\0\l\b\5\m\q\6\8\8\n\q\0\m\r\o\j\y\s\n\u\o\f\g\v\p\s\n\5\x\l\w\n\z\t\2\i\5\l\r\t\w\w\5\o\9\v\b\0\0\3\c\u\0\s\r\3\r\5\0\2\v\c\z\b\m\t\e\2\v\e\h\7\g\o\h\d\s\g\q\m\5\d\1\b ]] 00:06:33.818 09:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:33.818 09:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:33.818 [2024-10-16 09:20:58.108450] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:33.818 [2024-10-16 09:20:58.108572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60407 ] 00:06:34.075 [2024-10-16 09:20:58.243695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.075 [2024-10-16 09:20:58.292746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.075 [2024-10-16 09:20:58.347376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.075  [2024-10-16T09:20:58.736Z] Copying: 512/512 [B] (average 500 kBps) 00:06:34.332 00:06:34.332 09:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xxrsnp4unvene1yszrzoprkaq4jxkr4g7fblzmt665rf8ae7zu72ykkniajvt2xys9kw8kg10t2n4x2n8k96nxt2o7c1n4pt776tftqki64uqzhlgryd6ez8v79g3g34th103yvlexhjdrajqkiowgxubc1c1byh71nnj50y2proebdp5ciyp7oqy67hqq6gbqmjcruda3tc2izxf16ortyjyn3g2hr18cne73o48lhxjzgdi6re58wd8tygsd7ftrxlo0vjtexahkjx6hnuo5qdxk6o3oc89wgrdd7qfsnf4o1a162zsj1nojjspiy4hn9axy84rfburbuz28m5snbl2as3g5qxp48r5agv8r15azkaioo2ocqxmmqug6aurkm8c1qd0k7o5yyg4t5umb5u8rn2f471s9d3zqmfoex0lb5mq688nq0mrojysnuofgvpsn5xlwnzt2i5lrtww5o9vb003cu0sr3r502vczbmte2veh7gohdsgqm5d1b == \4\x\x\r\s\n\p\4\u\n\v\e\n\e\1\y\s\z\r\z\o\p\r\k\a\q\4\j\x\k\r\4\g\7\f\b\l\z\m\t\6\6\5\r\f\8\a\e\7\z\u\7\2\y\k\k\n\i\a\j\v\t\2\x\y\s\9\k\w\8\k\g\1\0\t\2\n\4\x\2\n\8\k\9\6\n\x\t\2\o\7\c\1\n\4\p\t\7\7\6\t\f\t\q\k\i\6\4\u\q\z\h\l\g\r\y\d\6\e\z\8\v\7\9\g\3\g\3\4\t\h\1\0\3\y\v\l\e\x\h\j\d\r\a\j\q\k\i\o\w\g\x\u\b\c\1\c\1\b\y\h\7\1\n\n\j\5\0\y\2\p\r\o\e\b\d\p\5\c\i\y\p\7\o\q\y\6\7\h\q\q\6\g\b\q\m\j\c\r\u\d\a\3\t\c\2\i\z\x\f\1\6\o\r\t\y\j\y\n\3\g\2\h\r\1\8\c\n\e\7\3\o\4\8\l\h\x\j\z\g\d\i\6\r\e\5\8\w\d\8\t\y\g\s\d\7\f\t\r\x\l\o\0\v\j\t\e\x\a\h\k\j\x\6\h\n\u\o\5\q\d\x\k\6\o\3\o\c\8\9\w\g\r\d\d\7\q\f\s\n\f\4\o\1\a\1\6\2\z\s\j\1\n\o\j\j\s\p\i\y\4\h\n\9\a\x\y\8\4\r\f\b\u\r\b\u\z\2\8\m\5\s\n\b\l\2\a\s\3\g\5\q\x\p\4\8\r\5\a\g\v\8\r\1\5\a\z\k\a\i\o\o\2\o\c\q\x\m\m\q\u\g\6\a\u\r\k\m\8\c\1\q\d\0\k\7\o\5\y\y\g\4\t\5\u\m\b\5\u\8\r\n\2\f\4\7\1\s\9\d\3\z\q\m\f\o\e\x\0\l\b\5\m\q\6\8\8\n\q\0\m\r\o\j\y\s\n\u\o\f\g\v\p\s\n\5\x\l\w\n\z\t\2\i\5\l\r\t\w\w\5\o\9\v\b\0\0\3\c\u\0\s\r\3\r\5\0\2\v\c\z\b\m\t\e\2\v\e\h\7\g\o\h\d\s\g\q\m\5\d\1\b ]] 00:06:34.332 09:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.332 09:20:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:34.332 [2024-10-16 09:20:58.619236] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:34.332 [2024-10-16 09:20:58.619484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60422 ] 00:06:34.589 [2024-10-16 09:20:58.751752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.589 [2024-10-16 09:20:58.797499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.589 [2024-10-16 09:20:58.849468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.589  [2024-10-16T09:20:59.251Z] Copying: 512/512 [B] (average 125 kBps) 00:06:34.847 00:06:34.847 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xxrsnp4unvene1yszrzoprkaq4jxkr4g7fblzmt665rf8ae7zu72ykkniajvt2xys9kw8kg10t2n4x2n8k96nxt2o7c1n4pt776tftqki64uqzhlgryd6ez8v79g3g34th103yvlexhjdrajqkiowgxubc1c1byh71nnj50y2proebdp5ciyp7oqy67hqq6gbqmjcruda3tc2izxf16ortyjyn3g2hr18cne73o48lhxjzgdi6re58wd8tygsd7ftrxlo0vjtexahkjx6hnuo5qdxk6o3oc89wgrdd7qfsnf4o1a162zsj1nojjspiy4hn9axy84rfburbuz28m5snbl2as3g5qxp48r5agv8r15azkaioo2ocqxmmqug6aurkm8c1qd0k7o5yyg4t5umb5u8rn2f471s9d3zqmfoex0lb5mq688nq0mrojysnuofgvpsn5xlwnzt2i5lrtww5o9vb003cu0sr3r502vczbmte2veh7gohdsgqm5d1b == \4\x\x\r\s\n\p\4\u\n\v\e\n\e\1\y\s\z\r\z\o\p\r\k\a\q\4\j\x\k\r\4\g\7\f\b\l\z\m\t\6\6\5\r\f\8\a\e\7\z\u\7\2\y\k\k\n\i\a\j\v\t\2\x\y\s\9\k\w\8\k\g\1\0\t\2\n\4\x\2\n\8\k\9\6\n\x\t\2\o\7\c\1\n\4\p\t\7\7\6\t\f\t\q\k\i\6\4\u\q\z\h\l\g\r\y\d\6\e\z\8\v\7\9\g\3\g\3\4\t\h\1\0\3\y\v\l\e\x\h\j\d\r\a\j\q\k\i\o\w\g\x\u\b\c\1\c\1\b\y\h\7\1\n\n\j\5\0\y\2\p\r\o\e\b\d\p\5\c\i\y\p\7\o\q\y\6\7\h\q\q\6\g\b\q\m\j\c\r\u\d\a\3\t\c\2\i\z\x\f\1\6\o\r\t\y\j\y\n\3\g\2\h\r\1\8\c\n\e\7\3\o\4\8\l\h\x\j\z\g\d\i\6\r\e\5\8\w\d\8\t\y\g\s\d\7\f\t\r\x\l\o\0\v\j\t\e\x\a\h\k\j\x\6\h\n\u\o\5\q\d\x\k\6\o\3\o\c\8\9\w\g\r\d\d\7\q\f\s\n\f\4\o\1\a\1\6\2\z\s\j\1\n\o\j\j\s\p\i\y\4\h\n\9\a\x\y\8\4\r\f\b\u\r\b\u\z\2\8\m\5\s\n\b\l\2\a\s\3\g\5\q\x\p\4\8\r\5\a\g\v\8\r\1\5\a\z\k\a\i\o\o\2\o\c\q\x\m\m\q\u\g\6\a\u\r\k\m\8\c\1\q\d\0\k\7\o\5\y\y\g\4\t\5\u\m\b\5\u\8\r\n\2\f\4\7\1\s\9\d\3\z\q\m\f\o\e\x\0\l\b\5\m\q\6\8\8\n\q\0\m\r\o\j\y\s\n\u\o\f\g\v\p\s\n\5\x\l\w\n\z\t\2\i\5\l\r\t\w\w\5\o\9\v\b\0\0\3\c\u\0\s\r\3\r\5\0\2\v\c\z\b\m\t\e\2\v\e\h\7\g\o\h\d\s\g\q\m\5\d\1\b ]] 00:06:34.847 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:34.847 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:34.847 [2024-10-16 09:20:59.102450] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
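The sync and dsync output flags exercise synchronous writes: by analogy with dd and open(2), oflag=sync is expected to flush data and metadata on every write (O_SYNC) while oflag=dsync flushes only the file data (O_DSYNC); that mapping is an inference from the flag names, not something the log itself states. A hedged way to confirm it on the test VM, assuming strace is installed there:

    # inspect which open(2) flags spdk_dd actually passes for --oflag=dsync
    strace -f -e trace=openat /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 2>&1 |
        grep -E 'O_DSYNC|O_SYNC|O_DIRECT'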
00:06:34.847 [2024-10-16 09:20:59.102533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60426 ] 00:06:34.847 [2024-10-16 09:20:59.234126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.105 [2024-10-16 09:20:59.285763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.105 [2024-10-16 09:20:59.342464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.105  [2024-10-16T09:20:59.769Z] Copying: 512/512 [B] (average 500 kBps) 00:06:35.365 00:06:35.365 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4xxrsnp4unvene1yszrzoprkaq4jxkr4g7fblzmt665rf8ae7zu72ykkniajvt2xys9kw8kg10t2n4x2n8k96nxt2o7c1n4pt776tftqki64uqzhlgryd6ez8v79g3g34th103yvlexhjdrajqkiowgxubc1c1byh71nnj50y2proebdp5ciyp7oqy67hqq6gbqmjcruda3tc2izxf16ortyjyn3g2hr18cne73o48lhxjzgdi6re58wd8tygsd7ftrxlo0vjtexahkjx6hnuo5qdxk6o3oc89wgrdd7qfsnf4o1a162zsj1nojjspiy4hn9axy84rfburbuz28m5snbl2as3g5qxp48r5agv8r15azkaioo2ocqxmmqug6aurkm8c1qd0k7o5yyg4t5umb5u8rn2f471s9d3zqmfoex0lb5mq688nq0mrojysnuofgvpsn5xlwnzt2i5lrtww5o9vb003cu0sr3r502vczbmte2veh7gohdsgqm5d1b == \4\x\x\r\s\n\p\4\u\n\v\e\n\e\1\y\s\z\r\z\o\p\r\k\a\q\4\j\x\k\r\4\g\7\f\b\l\z\m\t\6\6\5\r\f\8\a\e\7\z\u\7\2\y\k\k\n\i\a\j\v\t\2\x\y\s\9\k\w\8\k\g\1\0\t\2\n\4\x\2\n\8\k\9\6\n\x\t\2\o\7\c\1\n\4\p\t\7\7\6\t\f\t\q\k\i\6\4\u\q\z\h\l\g\r\y\d\6\e\z\8\v\7\9\g\3\g\3\4\t\h\1\0\3\y\v\l\e\x\h\j\d\r\a\j\q\k\i\o\w\g\x\u\b\c\1\c\1\b\y\h\7\1\n\n\j\5\0\y\2\p\r\o\e\b\d\p\5\c\i\y\p\7\o\q\y\6\7\h\q\q\6\g\b\q\m\j\c\r\u\d\a\3\t\c\2\i\z\x\f\1\6\o\r\t\y\j\y\n\3\g\2\h\r\1\8\c\n\e\7\3\o\4\8\l\h\x\j\z\g\d\i\6\r\e\5\8\w\d\8\t\y\g\s\d\7\f\t\r\x\l\o\0\v\j\t\e\x\a\h\k\j\x\6\h\n\u\o\5\q\d\x\k\6\o\3\o\c\8\9\w\g\r\d\d\7\q\f\s\n\f\4\o\1\a\1\6\2\z\s\j\1\n\o\j\j\s\p\i\y\4\h\n\9\a\x\y\8\4\r\f\b\u\r\b\u\z\2\8\m\5\s\n\b\l\2\a\s\3\g\5\q\x\p\4\8\r\5\a\g\v\8\r\1\5\a\z\k\a\i\o\o\2\o\c\q\x\m\m\q\u\g\6\a\u\r\k\m\8\c\1\q\d\0\k\7\o\5\y\y\g\4\t\5\u\m\b\5\u\8\r\n\2\f\4\7\1\s\9\d\3\z\q\m\f\o\e\x\0\l\b\5\m\q\6\8\8\n\q\0\m\r\o\j\y\s\n\u\o\f\g\v\p\s\n\5\x\l\w\n\z\t\2\i\5\l\r\t\w\w\5\o\9\v\b\0\0\3\c\u\0\s\r\3\r\5\0\2\v\c\z\b\m\t\e\2\v\e\h\7\g\o\h\d\s\g\q\m\5\d\1\b ]] 00:06:35.365 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:35.365 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:35.365 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:35.365 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:35.365 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:35.365 09:20:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:35.365 [2024-10-16 09:20:59.601158] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
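The flag combinations are produced by two nested loops in dd/posix.sh: flags_ro holds the read-side flags and flags_rw extends that list with sync and dsync for the write side, so every iflag in {direct, nonblock} is paired with every oflag in {direct, nonblock, sync, dsync}, with a fresh 512-byte random payload per read flag. A rough reconstruction from the xtrace above; the test_file0/test_file1 variable names and the redirection of gen_bytes into the dump file are assumptions, since neither appears in the trace:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)          # write side additionally gets sync/dsync
    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > "$test_file0"               # new random payload for this read flag
        for flag_rw in "${flags_rw[@]}"; do
            "${DD_APP[@]}" --if="$test_file0" --iflag="$flag_ro" \
                           --of="$test_file1" --oflag="$flag_rw"
            [[ "$(< "$test_file1")" == "$(< "$test_file0")" ]]   # destination must match source
        done
    done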
00:06:35.365 [2024-10-16 09:20:59.601246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60437 ] 00:06:35.365 [2024-10-16 09:20:59.730102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.624 [2024-10-16 09:20:59.773914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.624 [2024-10-16 09:20:59.831140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.624  [2024-10-16T09:21:00.287Z] Copying: 512/512 [B] (average 500 kBps) 00:06:35.883 00:06:35.883 09:21:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8x5zskjntmqlo7p9wp12hvexqkd85fn9l4c8yy3kxm0yjizt686nuz9fs0n29k5y5dboci8cskzu5qehoirpiync8n5feo5xv7evd2qf1fs8suex8t9mof1jr8ap6mflnlolhdenm6d2lcfdnlolaowtx1xf39s3pp0gdkd74ha2plnti4funexc8xtp5nv60c3zqzm92cpamqlv98ogbnfsipnwjb9uwidvbndx1hoz2pcw4h32r45sp7yw5kkxfqmulr379lanrxg5l0erfuj481lhl6pujhd2n7h94a3f88bc9u7kopmbhb5uiriqbr34abub2359ddiagcuzhpcupue33o4zogsui70yac2is65kz8j4dz65cair62uqrltpdi4az6idide55qngolwl3db59b3vpvyoitezukhqppk74qte51cmdv8iqqwq0hy3bxigyntfikecgh7wwyz0eumb8jvc7ha9ikpo6msgmlmywpy6su702bfzvtys == \8\x\5\z\s\k\j\n\t\m\q\l\o\7\p\9\w\p\1\2\h\v\e\x\q\k\d\8\5\f\n\9\l\4\c\8\y\y\3\k\x\m\0\y\j\i\z\t\6\8\6\n\u\z\9\f\s\0\n\2\9\k\5\y\5\d\b\o\c\i\8\c\s\k\z\u\5\q\e\h\o\i\r\p\i\y\n\c\8\n\5\f\e\o\5\x\v\7\e\v\d\2\q\f\1\f\s\8\s\u\e\x\8\t\9\m\o\f\1\j\r\8\a\p\6\m\f\l\n\l\o\l\h\d\e\n\m\6\d\2\l\c\f\d\n\l\o\l\a\o\w\t\x\1\x\f\3\9\s\3\p\p\0\g\d\k\d\7\4\h\a\2\p\l\n\t\i\4\f\u\n\e\x\c\8\x\t\p\5\n\v\6\0\c\3\z\q\z\m\9\2\c\p\a\m\q\l\v\9\8\o\g\b\n\f\s\i\p\n\w\j\b\9\u\w\i\d\v\b\n\d\x\1\h\o\z\2\p\c\w\4\h\3\2\r\4\5\s\p\7\y\w\5\k\k\x\f\q\m\u\l\r\3\7\9\l\a\n\r\x\g\5\l\0\e\r\f\u\j\4\8\1\l\h\l\6\p\u\j\h\d\2\n\7\h\9\4\a\3\f\8\8\b\c\9\u\7\k\o\p\m\b\h\b\5\u\i\r\i\q\b\r\3\4\a\b\u\b\2\3\5\9\d\d\i\a\g\c\u\z\h\p\c\u\p\u\e\3\3\o\4\z\o\g\s\u\i\7\0\y\a\c\2\i\s\6\5\k\z\8\j\4\d\z\6\5\c\a\i\r\6\2\u\q\r\l\t\p\d\i\4\a\z\6\i\d\i\d\e\5\5\q\n\g\o\l\w\l\3\d\b\5\9\b\3\v\p\v\y\o\i\t\e\z\u\k\h\q\p\p\k\7\4\q\t\e\5\1\c\m\d\v\8\i\q\q\w\q\0\h\y\3\b\x\i\g\y\n\t\f\i\k\e\c\g\h\7\w\w\y\z\0\e\u\m\b\8\j\v\c\7\h\a\9\i\k\p\o\6\m\s\g\m\l\m\y\w\p\y\6\s\u\7\0\2\b\f\z\v\t\y\s ]] 00:06:35.883 09:21:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:35.883 09:21:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:35.883 [2024-10-16 09:21:00.082823] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:35.883 [2024-10-16 09:21:00.083058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60445 ] 00:06:35.883 [2024-10-16 09:21:00.215891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.883 [2024-10-16 09:21:00.257756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.142 [2024-10-16 09:21:00.310094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.142  [2024-10-16T09:21:00.546Z] Copying: 512/512 [B] (average 500 kBps) 00:06:36.142 00:06:36.142 09:21:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8x5zskjntmqlo7p9wp12hvexqkd85fn9l4c8yy3kxm0yjizt686nuz9fs0n29k5y5dboci8cskzu5qehoirpiync8n5feo5xv7evd2qf1fs8suex8t9mof1jr8ap6mflnlolhdenm6d2lcfdnlolaowtx1xf39s3pp0gdkd74ha2plnti4funexc8xtp5nv60c3zqzm92cpamqlv98ogbnfsipnwjb9uwidvbndx1hoz2pcw4h32r45sp7yw5kkxfqmulr379lanrxg5l0erfuj481lhl6pujhd2n7h94a3f88bc9u7kopmbhb5uiriqbr34abub2359ddiagcuzhpcupue33o4zogsui70yac2is65kz8j4dz65cair62uqrltpdi4az6idide55qngolwl3db59b3vpvyoitezukhqppk74qte51cmdv8iqqwq0hy3bxigyntfikecgh7wwyz0eumb8jvc7ha9ikpo6msgmlmywpy6su702bfzvtys == \8\x\5\z\s\k\j\n\t\m\q\l\o\7\p\9\w\p\1\2\h\v\e\x\q\k\d\8\5\f\n\9\l\4\c\8\y\y\3\k\x\m\0\y\j\i\z\t\6\8\6\n\u\z\9\f\s\0\n\2\9\k\5\y\5\d\b\o\c\i\8\c\s\k\z\u\5\q\e\h\o\i\r\p\i\y\n\c\8\n\5\f\e\o\5\x\v\7\e\v\d\2\q\f\1\f\s\8\s\u\e\x\8\t\9\m\o\f\1\j\r\8\a\p\6\m\f\l\n\l\o\l\h\d\e\n\m\6\d\2\l\c\f\d\n\l\o\l\a\o\w\t\x\1\x\f\3\9\s\3\p\p\0\g\d\k\d\7\4\h\a\2\p\l\n\t\i\4\f\u\n\e\x\c\8\x\t\p\5\n\v\6\0\c\3\z\q\z\m\9\2\c\p\a\m\q\l\v\9\8\o\g\b\n\f\s\i\p\n\w\j\b\9\u\w\i\d\v\b\n\d\x\1\h\o\z\2\p\c\w\4\h\3\2\r\4\5\s\p\7\y\w\5\k\k\x\f\q\m\u\l\r\3\7\9\l\a\n\r\x\g\5\l\0\e\r\f\u\j\4\8\1\l\h\l\6\p\u\j\h\d\2\n\7\h\9\4\a\3\f\8\8\b\c\9\u\7\k\o\p\m\b\h\b\5\u\i\r\i\q\b\r\3\4\a\b\u\b\2\3\5\9\d\d\i\a\g\c\u\z\h\p\c\u\p\u\e\3\3\o\4\z\o\g\s\u\i\7\0\y\a\c\2\i\s\6\5\k\z\8\j\4\d\z\6\5\c\a\i\r\6\2\u\q\r\l\t\p\d\i\4\a\z\6\i\d\i\d\e\5\5\q\n\g\o\l\w\l\3\d\b\5\9\b\3\v\p\v\y\o\i\t\e\z\u\k\h\q\p\p\k\7\4\q\t\e\5\1\c\m\d\v\8\i\q\q\w\q\0\h\y\3\b\x\i\g\y\n\t\f\i\k\e\c\g\h\7\w\w\y\z\0\e\u\m\b\8\j\v\c\7\h\a\9\i\k\p\o\6\m\s\g\m\l\m\y\w\p\y\6\s\u\7\0\2\b\f\z\v\t\y\s ]] 00:06:36.142 09:21:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:36.142 09:21:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:36.401 [2024-10-16 09:21:00.571283] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:36.401 [2024-10-16 09:21:00.571377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60457 ] 00:06:36.401 [2024-10-16 09:21:00.708899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.401 [2024-10-16 09:21:00.758720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.660 [2024-10-16 09:21:00.812155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.660  [2024-10-16T09:21:01.064Z] Copying: 512/512 [B] (average 250 kBps) 00:06:36.660 00:06:36.660 09:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8x5zskjntmqlo7p9wp12hvexqkd85fn9l4c8yy3kxm0yjizt686nuz9fs0n29k5y5dboci8cskzu5qehoirpiync8n5feo5xv7evd2qf1fs8suex8t9mof1jr8ap6mflnlolhdenm6d2lcfdnlolaowtx1xf39s3pp0gdkd74ha2plnti4funexc8xtp5nv60c3zqzm92cpamqlv98ogbnfsipnwjb9uwidvbndx1hoz2pcw4h32r45sp7yw5kkxfqmulr379lanrxg5l0erfuj481lhl6pujhd2n7h94a3f88bc9u7kopmbhb5uiriqbr34abub2359ddiagcuzhpcupue33o4zogsui70yac2is65kz8j4dz65cair62uqrltpdi4az6idide55qngolwl3db59b3vpvyoitezukhqppk74qte51cmdv8iqqwq0hy3bxigyntfikecgh7wwyz0eumb8jvc7ha9ikpo6msgmlmywpy6su702bfzvtys == \8\x\5\z\s\k\j\n\t\m\q\l\o\7\p\9\w\p\1\2\h\v\e\x\q\k\d\8\5\f\n\9\l\4\c\8\y\y\3\k\x\m\0\y\j\i\z\t\6\8\6\n\u\z\9\f\s\0\n\2\9\k\5\y\5\d\b\o\c\i\8\c\s\k\z\u\5\q\e\h\o\i\r\p\i\y\n\c\8\n\5\f\e\o\5\x\v\7\e\v\d\2\q\f\1\f\s\8\s\u\e\x\8\t\9\m\o\f\1\j\r\8\a\p\6\m\f\l\n\l\o\l\h\d\e\n\m\6\d\2\l\c\f\d\n\l\o\l\a\o\w\t\x\1\x\f\3\9\s\3\p\p\0\g\d\k\d\7\4\h\a\2\p\l\n\t\i\4\f\u\n\e\x\c\8\x\t\p\5\n\v\6\0\c\3\z\q\z\m\9\2\c\p\a\m\q\l\v\9\8\o\g\b\n\f\s\i\p\n\w\j\b\9\u\w\i\d\v\b\n\d\x\1\h\o\z\2\p\c\w\4\h\3\2\r\4\5\s\p\7\y\w\5\k\k\x\f\q\m\u\l\r\3\7\9\l\a\n\r\x\g\5\l\0\e\r\f\u\j\4\8\1\l\h\l\6\p\u\j\h\d\2\n\7\h\9\4\a\3\f\8\8\b\c\9\u\7\k\o\p\m\b\h\b\5\u\i\r\i\q\b\r\3\4\a\b\u\b\2\3\5\9\d\d\i\a\g\c\u\z\h\p\c\u\p\u\e\3\3\o\4\z\o\g\s\u\i\7\0\y\a\c\2\i\s\6\5\k\z\8\j\4\d\z\6\5\c\a\i\r\6\2\u\q\r\l\t\p\d\i\4\a\z\6\i\d\i\d\e\5\5\q\n\g\o\l\w\l\3\d\b\5\9\b\3\v\p\v\y\o\i\t\e\z\u\k\h\q\p\p\k\7\4\q\t\e\5\1\c\m\d\v\8\i\q\q\w\q\0\h\y\3\b\x\i\g\y\n\t\f\i\k\e\c\g\h\7\w\w\y\z\0\e\u\m\b\8\j\v\c\7\h\a\9\i\k\p\o\6\m\s\g\m\l\m\y\w\p\y\6\s\u\7\0\2\b\f\z\v\t\y\s ]] 00:06:36.660 09:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:36.660 09:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:36.919 [2024-10-16 09:21:01.068467] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:36.919 [2024-10-16 09:21:01.068810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60466 ] 00:06:36.919 [2024-10-16 09:21:01.204567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.919 [2024-10-16 09:21:01.246617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.919 [2024-10-16 09:21:01.299957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.179  [2024-10-16T09:21:01.583Z] Copying: 512/512 [B] (average 250 kBps) 00:06:37.179 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 8x5zskjntmqlo7p9wp12hvexqkd85fn9l4c8yy3kxm0yjizt686nuz9fs0n29k5y5dboci8cskzu5qehoirpiync8n5feo5xv7evd2qf1fs8suex8t9mof1jr8ap6mflnlolhdenm6d2lcfdnlolaowtx1xf39s3pp0gdkd74ha2plnti4funexc8xtp5nv60c3zqzm92cpamqlv98ogbnfsipnwjb9uwidvbndx1hoz2pcw4h32r45sp7yw5kkxfqmulr379lanrxg5l0erfuj481lhl6pujhd2n7h94a3f88bc9u7kopmbhb5uiriqbr34abub2359ddiagcuzhpcupue33o4zogsui70yac2is65kz8j4dz65cair62uqrltpdi4az6idide55qngolwl3db59b3vpvyoitezukhqppk74qte51cmdv8iqqwq0hy3bxigyntfikecgh7wwyz0eumb8jvc7ha9ikpo6msgmlmywpy6su702bfzvtys == \8\x\5\z\s\k\j\n\t\m\q\l\o\7\p\9\w\p\1\2\h\v\e\x\q\k\d\8\5\f\n\9\l\4\c\8\y\y\3\k\x\m\0\y\j\i\z\t\6\8\6\n\u\z\9\f\s\0\n\2\9\k\5\y\5\d\b\o\c\i\8\c\s\k\z\u\5\q\e\h\o\i\r\p\i\y\n\c\8\n\5\f\e\o\5\x\v\7\e\v\d\2\q\f\1\f\s\8\s\u\e\x\8\t\9\m\o\f\1\j\r\8\a\p\6\m\f\l\n\l\o\l\h\d\e\n\m\6\d\2\l\c\f\d\n\l\o\l\a\o\w\t\x\1\x\f\3\9\s\3\p\p\0\g\d\k\d\7\4\h\a\2\p\l\n\t\i\4\f\u\n\e\x\c\8\x\t\p\5\n\v\6\0\c\3\z\q\z\m\9\2\c\p\a\m\q\l\v\9\8\o\g\b\n\f\s\i\p\n\w\j\b\9\u\w\i\d\v\b\n\d\x\1\h\o\z\2\p\c\w\4\h\3\2\r\4\5\s\p\7\y\w\5\k\k\x\f\q\m\u\l\r\3\7\9\l\a\n\r\x\g\5\l\0\e\r\f\u\j\4\8\1\l\h\l\6\p\u\j\h\d\2\n\7\h\9\4\a\3\f\8\8\b\c\9\u\7\k\o\p\m\b\h\b\5\u\i\r\i\q\b\r\3\4\a\b\u\b\2\3\5\9\d\d\i\a\g\c\u\z\h\p\c\u\p\u\e\3\3\o\4\z\o\g\s\u\i\7\0\y\a\c\2\i\s\6\5\k\z\8\j\4\d\z\6\5\c\a\i\r\6\2\u\q\r\l\t\p\d\i\4\a\z\6\i\d\i\d\e\5\5\q\n\g\o\l\w\l\3\d\b\5\9\b\3\v\p\v\y\o\i\t\e\z\u\k\h\q\p\p\k\7\4\q\t\e\5\1\c\m\d\v\8\i\q\q\w\q\0\h\y\3\b\x\i\g\y\n\t\f\i\k\e\c\g\h\7\w\w\y\z\0\e\u\m\b\8\j\v\c\7\h\a\9\i\k\p\o\6\m\s\g\m\l\m\y\w\p\y\6\s\u\7\0\2\b\f\z\v\t\y\s ]] 00:06:37.179 00:06:37.179 real 0m3.948s 00:06:37.179 user 0m2.057s 00:06:37.179 sys 0m2.049s 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:37.179 ************************************ 00:06:37.179 END TEST dd_flags_misc 00:06:37.179 ************************************ 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:37.179 * Second test run, disabling liburing, forcing AIO 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.179 ************************************ 00:06:37.179 START TEST dd_flag_append_forced_aio 00:06:37.179 ************************************ 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=3tuqylgk1phjropj4cqsqpwendqlhtna 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=wgbab8ch18t1hy6r95yven8k7qf5e0sm 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 3tuqylgk1phjropj4cqsqpwendqlhtna 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s wgbab8ch18t1hy6r95yven8k7qf5e0sm 00:06:37.179 09:21:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:37.438 [2024-10-16 09:21:01.628818] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
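From this point on the suite repeats with the AIO fallback: liburing is dropped and every spdk_dd call carries --aio (the DD_APP+=("--aio") above). The append test generates two independent 32-character payloads, dump0 and dump1, writes each to its dump file, copies dump0 onto dump1 with --oflag=append, and then expects the destination to hold dump1's original content followed by dump0's, exactly the concatenation checked in the [[ ... ]] that follows. A minimal sketch, assuming the same gen_bytes/printf plumbing shown in the xtrace and the test_file0/test_file1 names used above:

    dump0=$(gen_bytes 32)                            # e.g. 3tuqylgk1phjropj4cqsqpwendqlhtna above
    dump1=$(gen_bytes 32)                            # e.g. wgbab8ch18t1hy6r95yven8k7qf5e0sm above
    printf %s "$dump0" > "$test_file0"
    printf %s "$dump1" > "$test_file1"
    "${DD_APP[@]}" --if="$test_file0" --of="$test_file1" --oflag=append
    [[ "$(< "$test_file1")" == "${dump1}${dump0}" ]]  # old content kept, new bytes appended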
00:06:37.438 [2024-10-16 09:21:01.628935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60495 ] 00:06:37.438 [2024-10-16 09:21:01.770534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.438 [2024-10-16 09:21:01.829943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.697 [2024-10-16 09:21:01.887938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.697  [2024-10-16T09:21:02.360Z] Copying: 32/32 [B] (average 31 kBps) 00:06:37.956 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ wgbab8ch18t1hy6r95yven8k7qf5e0sm3tuqylgk1phjropj4cqsqpwendqlhtna == \w\g\b\a\b\8\c\h\1\8\t\1\h\y\6\r\9\5\y\v\e\n\8\k\7\q\f\5\e\0\s\m\3\t\u\q\y\l\g\k\1\p\h\j\r\o\p\j\4\c\q\s\q\p\w\e\n\d\q\l\h\t\n\a ]] 00:06:37.956 00:06:37.956 real 0m0.568s 00:06:37.956 user 0m0.298s 00:06:37.956 sys 0m0.148s 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.956 ************************************ 00:06:37.956 END TEST dd_flag_append_forced_aio 00:06:37.956 ************************************ 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:37.956 ************************************ 00:06:37.956 START TEST dd_flag_directory_forced_aio 00:06:37.956 ************************************ 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.956 09:21:02 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:37.956 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.956 [2024-10-16 09:21:02.245017] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:37.956 [2024-10-16 09:21:02.245110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60521 ] 00:06:38.215 [2024-10-16 09:21:02.382853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.215 [2024-10-16 09:21:02.428439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.215 [2024-10-16 09:21:02.482885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.215 [2024-10-16 09:21:02.517338] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:38.215 [2024-10-16 09:21:02.517652] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:38.215 [2024-10-16 09:21:02.517671] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.474 [2024-10-16 09:21:02.637991] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:38.474 09:21:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:38.474 [2024-10-16 09:21:02.761633] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:38.475 [2024-10-16 09:21:02.761899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60535 ] 00:06:38.734 [2024-10-16 09:21:02.896571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.734 [2024-10-16 09:21:02.945383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.734 [2024-10-16 09:21:03.000596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.734 [2024-10-16 09:21:03.033922] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:38.734 [2024-10-16 09:21:03.033976] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:38.734 [2024-10-16 09:21:03.034006] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.993 [2024-10-16 09:21:03.148806] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:38.993 09:21:03 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.993 00:06:38.993 real 0m1.022s 00:06:38.993 user 0m0.529s 00:06:38.993 sys 0m0.283s 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.993 ************************************ 00:06:38.993 END TEST dd_flag_directory_forced_aio 00:06:38.993 ************************************ 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:38.993 ************************************ 00:06:38.993 START TEST dd_flag_nofollow_forced_aio 00:06:38.993 ************************************ 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:38.993 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:38.994 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.994 [2024-10-16 09:21:03.328566] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:38.994 [2024-10-16 09:21:03.328669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60559 ] 00:06:39.253 [2024-10-16 09:21:03.465117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.253 [2024-10-16 09:21:03.504809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.253 [2024-10-16 09:21:03.557886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.253 [2024-10-16 09:21:03.593356] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:39.253 [2024-10-16 09:21:03.593409] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:39.253 [2024-10-16 09:21:03.593439] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.512 [2024-10-16 09:21:03.709788] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.512 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.513 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.513 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.513 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.513 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:39.513 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:39.513 09:21:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:39.513 [2024-10-16 09:21:03.830292] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:39.513 [2024-10-16 09:21:03.830393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60573 ] 00:06:39.894 [2024-10-16 09:21:03.967450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.894 [2024-10-16 09:21:04.015320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.894 [2024-10-16 09:21:04.070746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.894 [2024-10-16 09:21:04.104120] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:39.894 [2024-10-16 09:21:04.104188] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:39.894 [2024-10-16 09:21:04.104218] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.894 [2024-10-16 09:21:04.217574] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:40.153 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:40.153 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.153 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:40.154 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:40.154 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:40.154 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.154 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:40.154 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:40.154 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.154 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.154 [2024-10-16 09:21:04.341861] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:40.154 [2024-10-16 09:21:04.341967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60576 ] 00:06:40.154 [2024-10-16 09:21:04.477537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.154 [2024-10-16 09:21:04.518027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.413 [2024-10-16 09:21:04.571707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.413  [2024-10-16T09:21:04.817Z] Copying: 512/512 [B] (average 500 kBps) 00:06:40.413 00:06:40.413 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ ptcxd6htlokqmb9iq5wipl3o4j2d0kbmpjwzs9mzf02nztvwgklb7wgwp79vaqzaeioyb952f2arr8hxovpj0nms1yroh4a2wd7ts9l9a7e1z7bcbxzt1w8ni7uk96jatwy0tkj6ulbthv4c9vylfektyv579o9dsofmw7t7za6761vki1iv718iloe5vaimpvphiihqqmtavkb93dl739ep455frioc6lh2mip1ynus90tze8b37732yueg2ch1hbawlkces8eo9kzafm6qj9f8zlz1rckdtqbhvruqkou6nvtle255mvgqum7zqq72big7k1710zchcx46l97v23az72v90j9hpoce7phircnlh62f0bp07szjl5cgut7z0373lgyrcbnthnnq9s0ac4hcpagxk73pqzvkag12wl3tsg1su3tju3adh7rukqnjdd3gk77srn1rbhlgb85togqwh4ukto9lndu40cch15tgikt0msbdmjihpdg6jay4 == \p\t\c\x\d\6\h\t\l\o\k\q\m\b\9\i\q\5\w\i\p\l\3\o\4\j\2\d\0\k\b\m\p\j\w\z\s\9\m\z\f\0\2\n\z\t\v\w\g\k\l\b\7\w\g\w\p\7\9\v\a\q\z\a\e\i\o\y\b\9\5\2\f\2\a\r\r\8\h\x\o\v\p\j\0\n\m\s\1\y\r\o\h\4\a\2\w\d\7\t\s\9\l\9\a\7\e\1\z\7\b\c\b\x\z\t\1\w\8\n\i\7\u\k\9\6\j\a\t\w\y\0\t\k\j\6\u\l\b\t\h\v\4\c\9\v\y\l\f\e\k\t\y\v\5\7\9\o\9\d\s\o\f\m\w\7\t\7\z\a\6\7\6\1\v\k\i\1\i\v\7\1\8\i\l\o\e\5\v\a\i\m\p\v\p\h\i\i\h\q\q\m\t\a\v\k\b\9\3\d\l\7\3\9\e\p\4\5\5\f\r\i\o\c\6\l\h\2\m\i\p\1\y\n\u\s\9\0\t\z\e\8\b\3\7\7\3\2\y\u\e\g\2\c\h\1\h\b\a\w\l\k\c\e\s\8\e\o\9\k\z\a\f\m\6\q\j\9\f\8\z\l\z\1\r\c\k\d\t\q\b\h\v\r\u\q\k\o\u\6\n\v\t\l\e\2\5\5\m\v\g\q\u\m\7\z\q\q\7\2\b\i\g\7\k\1\7\1\0\z\c\h\c\x\4\6\l\9\7\v\2\3\a\z\7\2\v\9\0\j\9\h\p\o\c\e\7\p\h\i\r\c\n\l\h\6\2\f\0\b\p\0\7\s\z\j\l\5\c\g\u\t\7\z\0\3\7\3\l\g\y\r\c\b\n\t\h\n\n\q\9\s\0\a\c\4\h\c\p\a\g\x\k\7\3\p\q\z\v\k\a\g\1\2\w\l\3\t\s\g\1\s\u\3\t\j\u\3\a\d\h\7\r\u\k\q\n\j\d\d\3\g\k\7\7\s\r\n\1\r\b\h\l\g\b\8\5\t\o\g\q\w\h\4\u\k\t\o\9\l\n\d\u\4\0\c\c\h\1\5\t\g\i\k\t\0\m\s\b\d\m\j\i\h\p\d\g\6\j\a\y\4 ]] 00:06:40.413 00:06:40.413 real 0m1.539s 00:06:40.413 user 0m0.812s 00:06:40.413 sys 0m0.402s 00:06:40.413 ************************************ 00:06:40.413 END TEST dd_flag_nofollow_forced_aio 00:06:40.413 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.413 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.413 ************************************ 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:40.671 ************************************ 00:06:40.671 START TEST dd_flag_noatime_forced_aio 00:06:40.671 ************************************ 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1729070464 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1729070464 00:06:40.671 09:21:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:41.608 09:21:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.608 [2024-10-16 09:21:05.929114] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
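The noatime test records the source file's access time with stat --printf=%X, sleeps for a second, and copies with --iflag=noatime; the source's atime must be unchanged afterwards (the (( atime_if == 1729070464 )) check further down), whereas a second copy without the flag is expected to advance it (the later (( atime_if < ... )) check). A rough sketch of those assertions, with helper names taken from the xtrace and the points at which atime is re-read assumed:

    atime_if=$(stat --printf=%X "$test_file0")                 # atime before any copy
    sleep 1
    "${DD_APP[@]}" --if="$test_file0" --iflag=noatime --of="$test_file1"
    (( atime_if == $(stat --printf=%X "$test_file0") ))        # noatime read must not touch atime
    "${DD_APP[@]}" --if="$test_file0" --of="$test_file1"
    (( atime_if <  $(stat --printf=%X "$test_file0") ))        # a normal read should advance it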
00:06:41.608 [2024-10-16 09:21:05.929226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60621 ] 00:06:41.868 [2024-10-16 09:21:06.070418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.868 [2024-10-16 09:21:06.121917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.868 [2024-10-16 09:21:06.181533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.868  [2024-10-16T09:21:06.531Z] Copying: 512/512 [B] (average 500 kBps) 00:06:42.127 00:06:42.127 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.127 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1729070464 )) 00:06:42.127 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.127 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1729070464 )) 00:06:42.127 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:42.127 [2024-10-16 09:21:06.477875] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:42.127 [2024-10-16 09:21:06.477996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60628 ] 00:06:42.386 [2024-10-16 09:21:06.613445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.386 [2024-10-16 09:21:06.652124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.386 [2024-10-16 09:21:06.704706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.386  [2024-10-16T09:21:07.049Z] Copying: 512/512 [B] (average 500 kBps) 00:06:42.645 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1729070466 )) 00:06:42.645 00:06:42.645 real 0m2.074s 00:06:42.645 user 0m0.552s 00:06:42.645 sys 0m0.285s 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.645 ************************************ 00:06:42.645 END TEST dd_flag_noatime_forced_aio 00:06:42.645 ************************************ 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:42.645 ************************************ 00:06:42.645 START TEST dd_flags_misc_forced_aio 00:06:42.645 ************************************ 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.645 09:21:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:42.645 [2024-10-16 09:21:07.041879] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:42.646 [2024-10-16 09:21:07.041983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60660 ] 00:06:42.905 [2024-10-16 09:21:07.183068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.905 [2024-10-16 09:21:07.255043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.163 [2024-10-16 09:21:07.316959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.163  [2024-10-16T09:21:07.826Z] Copying: 512/512 [B] (average 500 kBps) 00:06:43.422 00:06:43.422 09:21:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xjsi48ly0xa6rwbox96al8knlyquoo0tvgwr2xwzf7dfe84h54fedryfksrf08yxzdjx66mpt8wz024opwkk3261umuea1f5a78ginnaevmwvdmljhlacbk49hat48usxeyooln4hz19j8pp2ma3uudj2yxcqaq1cobr4t9v6uohvr13c0540whctqqx3bj6ymrmt2r42swt80ul575hiai9n0ih74bpwphupiz7hdx3kf6lk56zz61ytqf9kl5s7fsb54if0v1psawwhslg7q0gymh92l89gnl0um3tff4r0zxilltj0lafxa19ihlq2e6mc9rd21h3kmx4c2m0zjhz0slmwrzmgfrmkuejn77j3vxphuqbrx31k7xigtvevyzrevtpcbdk3vlrsw1ro2qqbkhddzsgq1y9ax24ewurnn91647gre4jig9e8l4jrqj6mqjcaqz3gttcz0r3vc567jqe3wqqbdvrz9fon46iekpcu0p6o4uf0j2bhgqt == 
\x\j\s\i\4\8\l\y\0\x\a\6\r\w\b\o\x\9\6\a\l\8\k\n\l\y\q\u\o\o\0\t\v\g\w\r\2\x\w\z\f\7\d\f\e\8\4\h\5\4\f\e\d\r\y\f\k\s\r\f\0\8\y\x\z\d\j\x\6\6\m\p\t\8\w\z\0\2\4\o\p\w\k\k\3\2\6\1\u\m\u\e\a\1\f\5\a\7\8\g\i\n\n\a\e\v\m\w\v\d\m\l\j\h\l\a\c\b\k\4\9\h\a\t\4\8\u\s\x\e\y\o\o\l\n\4\h\z\1\9\j\8\p\p\2\m\a\3\u\u\d\j\2\y\x\c\q\a\q\1\c\o\b\r\4\t\9\v\6\u\o\h\v\r\1\3\c\0\5\4\0\w\h\c\t\q\q\x\3\b\j\6\y\m\r\m\t\2\r\4\2\s\w\t\8\0\u\l\5\7\5\h\i\a\i\9\n\0\i\h\7\4\b\p\w\p\h\u\p\i\z\7\h\d\x\3\k\f\6\l\k\5\6\z\z\6\1\y\t\q\f\9\k\l\5\s\7\f\s\b\5\4\i\f\0\v\1\p\s\a\w\w\h\s\l\g\7\q\0\g\y\m\h\9\2\l\8\9\g\n\l\0\u\m\3\t\f\f\4\r\0\z\x\i\l\l\t\j\0\l\a\f\x\a\1\9\i\h\l\q\2\e\6\m\c\9\r\d\2\1\h\3\k\m\x\4\c\2\m\0\z\j\h\z\0\s\l\m\w\r\z\m\g\f\r\m\k\u\e\j\n\7\7\j\3\v\x\p\h\u\q\b\r\x\3\1\k\7\x\i\g\t\v\e\v\y\z\r\e\v\t\p\c\b\d\k\3\v\l\r\s\w\1\r\o\2\q\q\b\k\h\d\d\z\s\g\q\1\y\9\a\x\2\4\e\w\u\r\n\n\9\1\6\4\7\g\r\e\4\j\i\g\9\e\8\l\4\j\r\q\j\6\m\q\j\c\a\q\z\3\g\t\t\c\z\0\r\3\v\c\5\6\7\j\q\e\3\w\q\q\b\d\v\r\z\9\f\o\n\4\6\i\e\k\p\c\u\0\p\6\o\4\u\f\0\j\2\b\h\g\q\t ]] 00:06:43.422 09:21:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.422 09:21:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:43.422 [2024-10-16 09:21:07.642874] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:43.422 [2024-10-16 09:21:07.642980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60668 ] 00:06:43.422 [2024-10-16 09:21:07.784641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.681 [2024-10-16 09:21:07.857525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.681 [2024-10-16 09:21:07.920028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.681  [2024-10-16T09:21:08.343Z] Copying: 512/512 [B] (average 500 kBps) 00:06:43.939 00:06:43.940 09:21:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xjsi48ly0xa6rwbox96al8knlyquoo0tvgwr2xwzf7dfe84h54fedryfksrf08yxzdjx66mpt8wz024opwkk3261umuea1f5a78ginnaevmwvdmljhlacbk49hat48usxeyooln4hz19j8pp2ma3uudj2yxcqaq1cobr4t9v6uohvr13c0540whctqqx3bj6ymrmt2r42swt80ul575hiai9n0ih74bpwphupiz7hdx3kf6lk56zz61ytqf9kl5s7fsb54if0v1psawwhslg7q0gymh92l89gnl0um3tff4r0zxilltj0lafxa19ihlq2e6mc9rd21h3kmx4c2m0zjhz0slmwrzmgfrmkuejn77j3vxphuqbrx31k7xigtvevyzrevtpcbdk3vlrsw1ro2qqbkhddzsgq1y9ax24ewurnn91647gre4jig9e8l4jrqj6mqjcaqz3gttcz0r3vc567jqe3wqqbdvrz9fon46iekpcu0p6o4uf0j2bhgqt == 
\x\j\s\i\4\8\l\y\0\x\a\6\r\w\b\o\x\9\6\a\l\8\k\n\l\y\q\u\o\o\0\t\v\g\w\r\2\x\w\z\f\7\d\f\e\8\4\h\5\4\f\e\d\r\y\f\k\s\r\f\0\8\y\x\z\d\j\x\6\6\m\p\t\8\w\z\0\2\4\o\p\w\k\k\3\2\6\1\u\m\u\e\a\1\f\5\a\7\8\g\i\n\n\a\e\v\m\w\v\d\m\l\j\h\l\a\c\b\k\4\9\h\a\t\4\8\u\s\x\e\y\o\o\l\n\4\h\z\1\9\j\8\p\p\2\m\a\3\u\u\d\j\2\y\x\c\q\a\q\1\c\o\b\r\4\t\9\v\6\u\o\h\v\r\1\3\c\0\5\4\0\w\h\c\t\q\q\x\3\b\j\6\y\m\r\m\t\2\r\4\2\s\w\t\8\0\u\l\5\7\5\h\i\a\i\9\n\0\i\h\7\4\b\p\w\p\h\u\p\i\z\7\h\d\x\3\k\f\6\l\k\5\6\z\z\6\1\y\t\q\f\9\k\l\5\s\7\f\s\b\5\4\i\f\0\v\1\p\s\a\w\w\h\s\l\g\7\q\0\g\y\m\h\9\2\l\8\9\g\n\l\0\u\m\3\t\f\f\4\r\0\z\x\i\l\l\t\j\0\l\a\f\x\a\1\9\i\h\l\q\2\e\6\m\c\9\r\d\2\1\h\3\k\m\x\4\c\2\m\0\z\j\h\z\0\s\l\m\w\r\z\m\g\f\r\m\k\u\e\j\n\7\7\j\3\v\x\p\h\u\q\b\r\x\3\1\k\7\x\i\g\t\v\e\v\y\z\r\e\v\t\p\c\b\d\k\3\v\l\r\s\w\1\r\o\2\q\q\b\k\h\d\d\z\s\g\q\1\y\9\a\x\2\4\e\w\u\r\n\n\9\1\6\4\7\g\r\e\4\j\i\g\9\e\8\l\4\j\r\q\j\6\m\q\j\c\a\q\z\3\g\t\t\c\z\0\r\3\v\c\5\6\7\j\q\e\3\w\q\q\b\d\v\r\z\9\f\o\n\4\6\i\e\k\p\c\u\0\p\6\o\4\u\f\0\j\2\b\h\g\q\t ]] 00:06:43.940 09:21:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.940 09:21:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:43.940 [2024-10-16 09:21:08.230953] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:43.940 [2024-10-16 09:21:08.231069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60675 ] 00:06:44.198 [2024-10-16 09:21:08.366306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.198 [2024-10-16 09:21:08.440382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.198 [2024-10-16 09:21:08.501107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.198  [2024-10-16T09:21:08.861Z] Copying: 512/512 [B] (average 166 kBps) 00:06:44.457 00:06:44.457 09:21:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xjsi48ly0xa6rwbox96al8knlyquoo0tvgwr2xwzf7dfe84h54fedryfksrf08yxzdjx66mpt8wz024opwkk3261umuea1f5a78ginnaevmwvdmljhlacbk49hat48usxeyooln4hz19j8pp2ma3uudj2yxcqaq1cobr4t9v6uohvr13c0540whctqqx3bj6ymrmt2r42swt80ul575hiai9n0ih74bpwphupiz7hdx3kf6lk56zz61ytqf9kl5s7fsb54if0v1psawwhslg7q0gymh92l89gnl0um3tff4r0zxilltj0lafxa19ihlq2e6mc9rd21h3kmx4c2m0zjhz0slmwrzmgfrmkuejn77j3vxphuqbrx31k7xigtvevyzrevtpcbdk3vlrsw1ro2qqbkhddzsgq1y9ax24ewurnn91647gre4jig9e8l4jrqj6mqjcaqz3gttcz0r3vc567jqe3wqqbdvrz9fon46iekpcu0p6o4uf0j2bhgqt == 
\x\j\s\i\4\8\l\y\0\x\a\6\r\w\b\o\x\9\6\a\l\8\k\n\l\y\q\u\o\o\0\t\v\g\w\r\2\x\w\z\f\7\d\f\e\8\4\h\5\4\f\e\d\r\y\f\k\s\r\f\0\8\y\x\z\d\j\x\6\6\m\p\t\8\w\z\0\2\4\o\p\w\k\k\3\2\6\1\u\m\u\e\a\1\f\5\a\7\8\g\i\n\n\a\e\v\m\w\v\d\m\l\j\h\l\a\c\b\k\4\9\h\a\t\4\8\u\s\x\e\y\o\o\l\n\4\h\z\1\9\j\8\p\p\2\m\a\3\u\u\d\j\2\y\x\c\q\a\q\1\c\o\b\r\4\t\9\v\6\u\o\h\v\r\1\3\c\0\5\4\0\w\h\c\t\q\q\x\3\b\j\6\y\m\r\m\t\2\r\4\2\s\w\t\8\0\u\l\5\7\5\h\i\a\i\9\n\0\i\h\7\4\b\p\w\p\h\u\p\i\z\7\h\d\x\3\k\f\6\l\k\5\6\z\z\6\1\y\t\q\f\9\k\l\5\s\7\f\s\b\5\4\i\f\0\v\1\p\s\a\w\w\h\s\l\g\7\q\0\g\y\m\h\9\2\l\8\9\g\n\l\0\u\m\3\t\f\f\4\r\0\z\x\i\l\l\t\j\0\l\a\f\x\a\1\9\i\h\l\q\2\e\6\m\c\9\r\d\2\1\h\3\k\m\x\4\c\2\m\0\z\j\h\z\0\s\l\m\w\r\z\m\g\f\r\m\k\u\e\j\n\7\7\j\3\v\x\p\h\u\q\b\r\x\3\1\k\7\x\i\g\t\v\e\v\y\z\r\e\v\t\p\c\b\d\k\3\v\l\r\s\w\1\r\o\2\q\q\b\k\h\d\d\z\s\g\q\1\y\9\a\x\2\4\e\w\u\r\n\n\9\1\6\4\7\g\r\e\4\j\i\g\9\e\8\l\4\j\r\q\j\6\m\q\j\c\a\q\z\3\g\t\t\c\z\0\r\3\v\c\5\6\7\j\q\e\3\w\q\q\b\d\v\r\z\9\f\o\n\4\6\i\e\k\p\c\u\0\p\6\o\4\u\f\0\j\2\b\h\g\q\t ]] 00:06:44.457 09:21:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.457 09:21:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:44.457 [2024-10-16 09:21:08.806511] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:44.457 [2024-10-16 09:21:08.806646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60683 ] 00:06:44.715 [2024-10-16 09:21:08.944695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.715 [2024-10-16 09:21:09.015907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.715 [2024-10-16 09:21:09.075466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.715  [2024-10-16T09:21:09.378Z] Copying: 512/512 [B] (average 500 kBps) 00:06:44.974 00:06:44.974 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xjsi48ly0xa6rwbox96al8knlyquoo0tvgwr2xwzf7dfe84h54fedryfksrf08yxzdjx66mpt8wz024opwkk3261umuea1f5a78ginnaevmwvdmljhlacbk49hat48usxeyooln4hz19j8pp2ma3uudj2yxcqaq1cobr4t9v6uohvr13c0540whctqqx3bj6ymrmt2r42swt80ul575hiai9n0ih74bpwphupiz7hdx3kf6lk56zz61ytqf9kl5s7fsb54if0v1psawwhslg7q0gymh92l89gnl0um3tff4r0zxilltj0lafxa19ihlq2e6mc9rd21h3kmx4c2m0zjhz0slmwrzmgfrmkuejn77j3vxphuqbrx31k7xigtvevyzrevtpcbdk3vlrsw1ro2qqbkhddzsgq1y9ax24ewurnn91647gre4jig9e8l4jrqj6mqjcaqz3gttcz0r3vc567jqe3wqqbdvrz9fon46iekpcu0p6o4uf0j2bhgqt == 
\x\j\s\i\4\8\l\y\0\x\a\6\r\w\b\o\x\9\6\a\l\8\k\n\l\y\q\u\o\o\0\t\v\g\w\r\2\x\w\z\f\7\d\f\e\8\4\h\5\4\f\e\d\r\y\f\k\s\r\f\0\8\y\x\z\d\j\x\6\6\m\p\t\8\w\z\0\2\4\o\p\w\k\k\3\2\6\1\u\m\u\e\a\1\f\5\a\7\8\g\i\n\n\a\e\v\m\w\v\d\m\l\j\h\l\a\c\b\k\4\9\h\a\t\4\8\u\s\x\e\y\o\o\l\n\4\h\z\1\9\j\8\p\p\2\m\a\3\u\u\d\j\2\y\x\c\q\a\q\1\c\o\b\r\4\t\9\v\6\u\o\h\v\r\1\3\c\0\5\4\0\w\h\c\t\q\q\x\3\b\j\6\y\m\r\m\t\2\r\4\2\s\w\t\8\0\u\l\5\7\5\h\i\a\i\9\n\0\i\h\7\4\b\p\w\p\h\u\p\i\z\7\h\d\x\3\k\f\6\l\k\5\6\z\z\6\1\y\t\q\f\9\k\l\5\s\7\f\s\b\5\4\i\f\0\v\1\p\s\a\w\w\h\s\l\g\7\q\0\g\y\m\h\9\2\l\8\9\g\n\l\0\u\m\3\t\f\f\4\r\0\z\x\i\l\l\t\j\0\l\a\f\x\a\1\9\i\h\l\q\2\e\6\m\c\9\r\d\2\1\h\3\k\m\x\4\c\2\m\0\z\j\h\z\0\s\l\m\w\r\z\m\g\f\r\m\k\u\e\j\n\7\7\j\3\v\x\p\h\u\q\b\r\x\3\1\k\7\x\i\g\t\v\e\v\y\z\r\e\v\t\p\c\b\d\k\3\v\l\r\s\w\1\r\o\2\q\q\b\k\h\d\d\z\s\g\q\1\y\9\a\x\2\4\e\w\u\r\n\n\9\1\6\4\7\g\r\e\4\j\i\g\9\e\8\l\4\j\r\q\j\6\m\q\j\c\a\q\z\3\g\t\t\c\z\0\r\3\v\c\5\6\7\j\q\e\3\w\q\q\b\d\v\r\z\9\f\o\n\4\6\i\e\k\p\c\u\0\p\6\o\4\u\f\0\j\2\b\h\g\q\t ]] 00:06:44.974 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:44.974 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:44.974 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:44.974 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.974 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.974 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:45.233 [2024-10-16 09:21:09.392723] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:45.233 [2024-10-16 09:21:09.392820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60690 ] 00:06:45.233 [2024-10-16 09:21:09.531242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.233 [2024-10-16 09:21:09.587100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.493 [2024-10-16 09:21:09.644644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.493  [2024-10-16T09:21:09.897Z] Copying: 512/512 [B] (average 500 kBps) 00:06:45.493 00:06:45.493 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8rjbn3tb367vg9l1lbr154vqkm334wdzhri58u1mtl4brh1k0ol6ifmzmxngt8pul7q4ot1lne6drveypplsga2qqzy5692dr98ozevizdi2363ch41oglzqm5xa6u91yu3uu3vzi23rthvgc0csy2mbkek0ahx29p26i7do0e6ozox8299r1qlmc6qcaew2ar9nmnb1vpct5gcjhrnntmup7azj30gxe10mdfr2zgmetlw3hks3k9nrmf938fugzvb1tyyb384zkodm2e30x1dindpiexizus73ixyouucer2fgm7f7s24hepoezg2iz60f76gti0k4a147yk1rkrinadjhx96cof8vs86ljunfe77bdzxmrwt91uc4qesgkfnuewuayiv0tt65j693opvd6k8wliv4vc28uij1v6nrm5j9qv7gekd44f1g05pt8m7bbiusi7uxcxa6eadwx4lxm3e7zh000dyf1dd7qk9070422v4n23k74em1a9j6 == \8\r\j\b\n\3\t\b\3\6\7\v\g\9\l\1\l\b\r\1\5\4\v\q\k\m\3\3\4\w\d\z\h\r\i\5\8\u\1\m\t\l\4\b\r\h\1\k\0\o\l\6\i\f\m\z\m\x\n\g\t\8\p\u\l\7\q\4\o\t\1\l\n\e\6\d\r\v\e\y\p\p\l\s\g\a\2\q\q\z\y\5\6\9\2\d\r\9\8\o\z\e\v\i\z\d\i\2\3\6\3\c\h\4\1\o\g\l\z\q\m\5\x\a\6\u\9\1\y\u\3\u\u\3\v\z\i\2\3\r\t\h\v\g\c\0\c\s\y\2\m\b\k\e\k\0\a\h\x\2\9\p\2\6\i\7\d\o\0\e\6\o\z\o\x\8\2\9\9\r\1\q\l\m\c\6\q\c\a\e\w\2\a\r\9\n\m\n\b\1\v\p\c\t\5\g\c\j\h\r\n\n\t\m\u\p\7\a\z\j\3\0\g\x\e\1\0\m\d\f\r\2\z\g\m\e\t\l\w\3\h\k\s\3\k\9\n\r\m\f\9\3\8\f\u\g\z\v\b\1\t\y\y\b\3\8\4\z\k\o\d\m\2\e\3\0\x\1\d\i\n\d\p\i\e\x\i\z\u\s\7\3\i\x\y\o\u\u\c\e\r\2\f\g\m\7\f\7\s\2\4\h\e\p\o\e\z\g\2\i\z\6\0\f\7\6\g\t\i\0\k\4\a\1\4\7\y\k\1\r\k\r\i\n\a\d\j\h\x\9\6\c\o\f\8\v\s\8\6\l\j\u\n\f\e\7\7\b\d\z\x\m\r\w\t\9\1\u\c\4\q\e\s\g\k\f\n\u\e\w\u\a\y\i\v\0\t\t\6\5\j\6\9\3\o\p\v\d\6\k\8\w\l\i\v\4\v\c\2\8\u\i\j\1\v\6\n\r\m\5\j\9\q\v\7\g\e\k\d\4\4\f\1\g\0\5\p\t\8\m\7\b\b\i\u\s\i\7\u\x\c\x\a\6\e\a\d\w\x\4\l\x\m\3\e\7\z\h\0\0\0\d\y\f\1\d\d\7\q\k\9\0\7\0\4\2\2\v\4\n\2\3\k\7\4\e\m\1\a\9\j\6 ]] 00:06:45.493 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:45.493 09:21:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:45.752 [2024-10-16 09:21:09.925093] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:45.752 [2024-10-16 09:21:09.925200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60698 ] 00:06:45.752 [2024-10-16 09:21:10.054219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.752 [2024-10-16 09:21:10.103999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.012 [2024-10-16 09:21:10.159441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.012  [2024-10-16T09:21:10.416Z] Copying: 512/512 [B] (average 500 kBps) 00:06:46.012 00:06:46.012 09:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8rjbn3tb367vg9l1lbr154vqkm334wdzhri58u1mtl4brh1k0ol6ifmzmxngt8pul7q4ot1lne6drveypplsga2qqzy5692dr98ozevizdi2363ch41oglzqm5xa6u91yu3uu3vzi23rthvgc0csy2mbkek0ahx29p26i7do0e6ozox8299r1qlmc6qcaew2ar9nmnb1vpct5gcjhrnntmup7azj30gxe10mdfr2zgmetlw3hks3k9nrmf938fugzvb1tyyb384zkodm2e30x1dindpiexizus73ixyouucer2fgm7f7s24hepoezg2iz60f76gti0k4a147yk1rkrinadjhx96cof8vs86ljunfe77bdzxmrwt91uc4qesgkfnuewuayiv0tt65j693opvd6k8wliv4vc28uij1v6nrm5j9qv7gekd44f1g05pt8m7bbiusi7uxcxa6eadwx4lxm3e7zh000dyf1dd7qk9070422v4n23k74em1a9j6 == \8\r\j\b\n\3\t\b\3\6\7\v\g\9\l\1\l\b\r\1\5\4\v\q\k\m\3\3\4\w\d\z\h\r\i\5\8\u\1\m\t\l\4\b\r\h\1\k\0\o\l\6\i\f\m\z\m\x\n\g\t\8\p\u\l\7\q\4\o\t\1\l\n\e\6\d\r\v\e\y\p\p\l\s\g\a\2\q\q\z\y\5\6\9\2\d\r\9\8\o\z\e\v\i\z\d\i\2\3\6\3\c\h\4\1\o\g\l\z\q\m\5\x\a\6\u\9\1\y\u\3\u\u\3\v\z\i\2\3\r\t\h\v\g\c\0\c\s\y\2\m\b\k\e\k\0\a\h\x\2\9\p\2\6\i\7\d\o\0\e\6\o\z\o\x\8\2\9\9\r\1\q\l\m\c\6\q\c\a\e\w\2\a\r\9\n\m\n\b\1\v\p\c\t\5\g\c\j\h\r\n\n\t\m\u\p\7\a\z\j\3\0\g\x\e\1\0\m\d\f\r\2\z\g\m\e\t\l\w\3\h\k\s\3\k\9\n\r\m\f\9\3\8\f\u\g\z\v\b\1\t\y\y\b\3\8\4\z\k\o\d\m\2\e\3\0\x\1\d\i\n\d\p\i\e\x\i\z\u\s\7\3\i\x\y\o\u\u\c\e\r\2\f\g\m\7\f\7\s\2\4\h\e\p\o\e\z\g\2\i\z\6\0\f\7\6\g\t\i\0\k\4\a\1\4\7\y\k\1\r\k\r\i\n\a\d\j\h\x\9\6\c\o\f\8\v\s\8\6\l\j\u\n\f\e\7\7\b\d\z\x\m\r\w\t\9\1\u\c\4\q\e\s\g\k\f\n\u\e\w\u\a\y\i\v\0\t\t\6\5\j\6\9\3\o\p\v\d\6\k\8\w\l\i\v\4\v\c\2\8\u\i\j\1\v\6\n\r\m\5\j\9\q\v\7\g\e\k\d\4\4\f\1\g\0\5\p\t\8\m\7\b\b\i\u\s\i\7\u\x\c\x\a\6\e\a\d\w\x\4\l\x\m\3\e\7\z\h\0\0\0\d\y\f\1\d\d\7\q\k\9\0\7\0\4\2\2\v\4\n\2\3\k\7\4\e\m\1\a\9\j\6 ]] 00:06:46.012 09:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:46.012 09:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:46.272 [2024-10-16 09:21:10.439768] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:46.272 [2024-10-16 09:21:10.439869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60705 ] 00:06:46.272 [2024-10-16 09:21:10.576106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.272 [2024-10-16 09:21:10.633657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.531 [2024-10-16 09:21:10.693895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.531  [2024-10-16T09:21:11.194Z] Copying: 512/512 [B] (average 250 kBps) 00:06:46.790 00:06:46.790 09:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8rjbn3tb367vg9l1lbr154vqkm334wdzhri58u1mtl4brh1k0ol6ifmzmxngt8pul7q4ot1lne6drveypplsga2qqzy5692dr98ozevizdi2363ch41oglzqm5xa6u91yu3uu3vzi23rthvgc0csy2mbkek0ahx29p26i7do0e6ozox8299r1qlmc6qcaew2ar9nmnb1vpct5gcjhrnntmup7azj30gxe10mdfr2zgmetlw3hks3k9nrmf938fugzvb1tyyb384zkodm2e30x1dindpiexizus73ixyouucer2fgm7f7s24hepoezg2iz60f76gti0k4a147yk1rkrinadjhx96cof8vs86ljunfe77bdzxmrwt91uc4qesgkfnuewuayiv0tt65j693opvd6k8wliv4vc28uij1v6nrm5j9qv7gekd44f1g05pt8m7bbiusi7uxcxa6eadwx4lxm3e7zh000dyf1dd7qk9070422v4n23k74em1a9j6 == \8\r\j\b\n\3\t\b\3\6\7\v\g\9\l\1\l\b\r\1\5\4\v\q\k\m\3\3\4\w\d\z\h\r\i\5\8\u\1\m\t\l\4\b\r\h\1\k\0\o\l\6\i\f\m\z\m\x\n\g\t\8\p\u\l\7\q\4\o\t\1\l\n\e\6\d\r\v\e\y\p\p\l\s\g\a\2\q\q\z\y\5\6\9\2\d\r\9\8\o\z\e\v\i\z\d\i\2\3\6\3\c\h\4\1\o\g\l\z\q\m\5\x\a\6\u\9\1\y\u\3\u\u\3\v\z\i\2\3\r\t\h\v\g\c\0\c\s\y\2\m\b\k\e\k\0\a\h\x\2\9\p\2\6\i\7\d\o\0\e\6\o\z\o\x\8\2\9\9\r\1\q\l\m\c\6\q\c\a\e\w\2\a\r\9\n\m\n\b\1\v\p\c\t\5\g\c\j\h\r\n\n\t\m\u\p\7\a\z\j\3\0\g\x\e\1\0\m\d\f\r\2\z\g\m\e\t\l\w\3\h\k\s\3\k\9\n\r\m\f\9\3\8\f\u\g\z\v\b\1\t\y\y\b\3\8\4\z\k\o\d\m\2\e\3\0\x\1\d\i\n\d\p\i\e\x\i\z\u\s\7\3\i\x\y\o\u\u\c\e\r\2\f\g\m\7\f\7\s\2\4\h\e\p\o\e\z\g\2\i\z\6\0\f\7\6\g\t\i\0\k\4\a\1\4\7\y\k\1\r\k\r\i\n\a\d\j\h\x\9\6\c\o\f\8\v\s\8\6\l\j\u\n\f\e\7\7\b\d\z\x\m\r\w\t\9\1\u\c\4\q\e\s\g\k\f\n\u\e\w\u\a\y\i\v\0\t\t\6\5\j\6\9\3\o\p\v\d\6\k\8\w\l\i\v\4\v\c\2\8\u\i\j\1\v\6\n\r\m\5\j\9\q\v\7\g\e\k\d\4\4\f\1\g\0\5\p\t\8\m\7\b\b\i\u\s\i\7\u\x\c\x\a\6\e\a\d\w\x\4\l\x\m\3\e\7\z\h\0\0\0\d\y\f\1\d\d\7\q\k\9\0\7\0\4\2\2\v\4\n\2\3\k\7\4\e\m\1\a\9\j\6 ]] 00:06:46.790 09:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:46.790 09:21:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:46.790 [2024-10-16 09:21:11.002328] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:46.790 [2024-10-16 09:21:11.002450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:06:46.790 [2024-10-16 09:21:11.142393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.110 [2024-10-16 09:21:11.198663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.110 [2024-10-16 09:21:11.260240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.110  [2024-10-16T09:21:11.772Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.368 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8rjbn3tb367vg9l1lbr154vqkm334wdzhri58u1mtl4brh1k0ol6ifmzmxngt8pul7q4ot1lne6drveypplsga2qqzy5692dr98ozevizdi2363ch41oglzqm5xa6u91yu3uu3vzi23rthvgc0csy2mbkek0ahx29p26i7do0e6ozox8299r1qlmc6qcaew2ar9nmnb1vpct5gcjhrnntmup7azj30gxe10mdfr2zgmetlw3hks3k9nrmf938fugzvb1tyyb384zkodm2e30x1dindpiexizus73ixyouucer2fgm7f7s24hepoezg2iz60f76gti0k4a147yk1rkrinadjhx96cof8vs86ljunfe77bdzxmrwt91uc4qesgkfnuewuayiv0tt65j693opvd6k8wliv4vc28uij1v6nrm5j9qv7gekd44f1g05pt8m7bbiusi7uxcxa6eadwx4lxm3e7zh000dyf1dd7qk9070422v4n23k74em1a9j6 == \8\r\j\b\n\3\t\b\3\6\7\v\g\9\l\1\l\b\r\1\5\4\v\q\k\m\3\3\4\w\d\z\h\r\i\5\8\u\1\m\t\l\4\b\r\h\1\k\0\o\l\6\i\f\m\z\m\x\n\g\t\8\p\u\l\7\q\4\o\t\1\l\n\e\6\d\r\v\e\y\p\p\l\s\g\a\2\q\q\z\y\5\6\9\2\d\r\9\8\o\z\e\v\i\z\d\i\2\3\6\3\c\h\4\1\o\g\l\z\q\m\5\x\a\6\u\9\1\y\u\3\u\u\3\v\z\i\2\3\r\t\h\v\g\c\0\c\s\y\2\m\b\k\e\k\0\a\h\x\2\9\p\2\6\i\7\d\o\0\e\6\o\z\o\x\8\2\9\9\r\1\q\l\m\c\6\q\c\a\e\w\2\a\r\9\n\m\n\b\1\v\p\c\t\5\g\c\j\h\r\n\n\t\m\u\p\7\a\z\j\3\0\g\x\e\1\0\m\d\f\r\2\z\g\m\e\t\l\w\3\h\k\s\3\k\9\n\r\m\f\9\3\8\f\u\g\z\v\b\1\t\y\y\b\3\8\4\z\k\o\d\m\2\e\3\0\x\1\d\i\n\d\p\i\e\x\i\z\u\s\7\3\i\x\y\o\u\u\c\e\r\2\f\g\m\7\f\7\s\2\4\h\e\p\o\e\z\g\2\i\z\6\0\f\7\6\g\t\i\0\k\4\a\1\4\7\y\k\1\r\k\r\i\n\a\d\j\h\x\9\6\c\o\f\8\v\s\8\6\l\j\u\n\f\e\7\7\b\d\z\x\m\r\w\t\9\1\u\c\4\q\e\s\g\k\f\n\u\e\w\u\a\y\i\v\0\t\t\6\5\j\6\9\3\o\p\v\d\6\k\8\w\l\i\v\4\v\c\2\8\u\i\j\1\v\6\n\r\m\5\j\9\q\v\7\g\e\k\d\4\4\f\1\g\0\5\p\t\8\m\7\b\b\i\u\s\i\7\u\x\c\x\a\6\e\a\d\w\x\4\l\x\m\3\e\7\z\h\0\0\0\d\y\f\1\d\d\7\q\k\9\0\7\0\4\2\2\v\4\n\2\3\k\7\4\e\m\1\a\9\j\6 ]] 00:06:47.368 00:06:47.368 real 0m4.540s 00:06:47.368 user 0m2.428s 00:06:47.368 sys 0m1.133s 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.368 ************************************ 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:47.368 END TEST dd_flags_misc_forced_aio 00:06:47.368 ************************************ 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:47.368 ************************************ 00:06:47.368 END TEST spdk_dd_posix 00:06:47.368 ************************************ 00:06:47.368 00:06:47.368 real 0m19.821s 00:06:47.368 user 0m9.348s 00:06:47.368 sys 0m6.394s 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.368 09:21:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:47.368 09:21:11 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:47.368 09:21:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.368 09:21:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.368 09:21:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:47.368 ************************************ 00:06:47.368 START TEST spdk_dd_malloc 00:06:47.368 ************************************ 00:06:47.368 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:47.368 * Looking for test storage... 00:06:47.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:47.368 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:47.368 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:47.368 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:47.627 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.628 --rc genhtml_branch_coverage=1 00:06:47.628 --rc genhtml_function_coverage=1 00:06:47.628 --rc genhtml_legend=1 00:06:47.628 --rc geninfo_all_blocks=1 00:06:47.628 --rc geninfo_unexecuted_blocks=1 00:06:47.628 00:06:47.628 ' 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.628 --rc genhtml_branch_coverage=1 00:06:47.628 --rc genhtml_function_coverage=1 00:06:47.628 --rc genhtml_legend=1 00:06:47.628 --rc geninfo_all_blocks=1 00:06:47.628 --rc geninfo_unexecuted_blocks=1 00:06:47.628 00:06:47.628 ' 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.628 --rc genhtml_branch_coverage=1 00:06:47.628 --rc genhtml_function_coverage=1 00:06:47.628 --rc genhtml_legend=1 00:06:47.628 --rc geninfo_all_blocks=1 00:06:47.628 --rc geninfo_unexecuted_blocks=1 00:06:47.628 00:06:47.628 ' 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:47.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.628 --rc genhtml_branch_coverage=1 00:06:47.628 --rc genhtml_function_coverage=1 00:06:47.628 --rc genhtml_legend=1 00:06:47.628 --rc geninfo_all_blocks=1 00:06:47.628 --rc geninfo_unexecuted_blocks=1 00:06:47.628 00:06:47.628 ' 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.628 09:21:11 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:47.628 ************************************ 00:06:47.628 START TEST dd_malloc_copy 00:06:47.628 ************************************ 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:47.628 09:21:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.628 { 00:06:47.628 "subsystems": [ 00:06:47.628 { 00:06:47.628 "subsystem": "bdev", 00:06:47.628 "config": [ 00:06:47.628 { 00:06:47.628 "params": { 00:06:47.628 "block_size": 512, 00:06:47.628 "num_blocks": 1048576, 00:06:47.628 "name": "malloc0" 00:06:47.628 }, 00:06:47.628 "method": "bdev_malloc_create" 00:06:47.628 }, 00:06:47.628 { 00:06:47.628 "params": { 00:06:47.628 "block_size": 512, 00:06:47.628 "num_blocks": 1048576, 00:06:47.628 "name": "malloc1" 00:06:47.628 }, 00:06:47.628 "method": "bdev_malloc_create" 00:06:47.628 }, 00:06:47.628 { 00:06:47.628 "method": "bdev_wait_for_examine" 00:06:47.628 } 00:06:47.628 ] 00:06:47.628 } 00:06:47.628 ] 00:06:47.628 } 00:06:47.628 [2024-10-16 09:21:11.878463] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:06:47.628 [2024-10-16 09:21:11.878578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60795 ] 00:06:47.628 [2024-10-16 09:21:12.016398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.887 [2024-10-16 09:21:12.081473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.887 [2024-10-16 09:21:12.138517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.263  [2024-10-16T09:21:14.602Z] Copying: 175/512 [MB] (175 MBps) [2024-10-16T09:21:15.538Z] Copying: 358/512 [MB] (183 MBps) [2024-10-16T09:21:15.797Z] Copying: 512/512 [MB] (average 188 MBps) 00:06:51.393 00:06:51.393 09:21:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:51.394 09:21:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:51.394 09:21:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:51.394 09:21:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.653 [2024-10-16 09:21:15.834074] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
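The timed copies in dd_malloc_copy move 512 MiB between a pair of RAM-backed malloc bdevs that exist only for the lifetime of the spdk_dd process. A standalone sketch of the forward copy, reusing the exact JSON printed in the log (the test builds the same config with gen_conf and hands it to spdk_dd on /dev/fd/62):

CONF=$(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)
# 1048576 blocks x 512 B = 512 MiB per bdev; copy malloc0 into malloc1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$CONF")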
00:06:51.653 [2024-10-16 09:21:15.834859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60848 ] 00:06:51.653 { 00:06:51.653 "subsystems": [ 00:06:51.653 { 00:06:51.653 "subsystem": "bdev", 00:06:51.653 "config": [ 00:06:51.653 { 00:06:51.653 "params": { 00:06:51.653 "block_size": 512, 00:06:51.653 "num_blocks": 1048576, 00:06:51.653 "name": "malloc0" 00:06:51.653 }, 00:06:51.653 "method": "bdev_malloc_create" 00:06:51.653 }, 00:06:51.653 { 00:06:51.653 "params": { 00:06:51.653 "block_size": 512, 00:06:51.653 "num_blocks": 1048576, 00:06:51.653 "name": "malloc1" 00:06:51.653 }, 00:06:51.653 "method": "bdev_malloc_create" 00:06:51.653 }, 00:06:51.653 { 00:06:51.653 "method": "bdev_wait_for_examine" 00:06:51.653 } 00:06:51.653 ] 00:06:51.653 } 00:06:51.653 ] 00:06:51.653 } 00:06:51.653 [2024-10-16 09:21:15.972028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.653 [2024-10-16 09:21:16.012404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.912 [2024-10-16 09:21:16.069278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.286  [2024-10-16T09:21:18.624Z] Copying: 236/512 [MB] (236 MBps) [2024-10-16T09:21:18.624Z] Copying: 472/512 [MB] (236 MBps) [2024-10-16T09:21:19.191Z] Copying: 512/512 [MB] (average 235 MBps) 00:06:54.787 00:06:54.787 00:06:54.787 real 0m7.329s 00:06:54.787 user 0m6.332s 00:06:54.787 sys 0m0.839s 00:06:54.787 09:21:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.787 09:21:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:54.787 ************************************ 00:06:54.787 END TEST dd_malloc_copy 00:06:54.787 ************************************ 00:06:54.787 ************************************ 00:06:54.787 END TEST spdk_dd_malloc 00:06:54.787 ************************************ 00:06:54.787 00:06:54.787 real 0m7.570s 00:06:54.787 user 0m6.459s 00:06:54.787 sys 0m0.954s 00:06:54.787 09:21:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.787 09:21:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:55.046 09:21:19 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:55.046 09:21:19 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:55.046 09:21:19 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.046 09:21:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:55.046 ************************************ 00:06:55.046 START TEST spdk_dd_bdev_to_bdev 00:06:55.046 ************************************ 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:55.046 * Looking for test storage... 
00:06:55.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.046 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:55.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.047 --rc genhtml_branch_coverage=1 00:06:55.047 --rc genhtml_function_coverage=1 00:06:55.047 --rc genhtml_legend=1 00:06:55.047 --rc geninfo_all_blocks=1 00:06:55.047 --rc geninfo_unexecuted_blocks=1 00:06:55.047 00:06:55.047 ' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:55.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.047 --rc genhtml_branch_coverage=1 00:06:55.047 --rc genhtml_function_coverage=1 00:06:55.047 --rc genhtml_legend=1 00:06:55.047 --rc geninfo_all_blocks=1 00:06:55.047 --rc geninfo_unexecuted_blocks=1 00:06:55.047 00:06:55.047 ' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:55.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.047 --rc genhtml_branch_coverage=1 00:06:55.047 --rc genhtml_function_coverage=1 00:06:55.047 --rc genhtml_legend=1 00:06:55.047 --rc geninfo_all_blocks=1 00:06:55.047 --rc geninfo_unexecuted_blocks=1 00:06:55.047 00:06:55.047 ' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:55.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.047 --rc genhtml_branch_coverage=1 00:06:55.047 --rc genhtml_function_coverage=1 00:06:55.047 --rc genhtml_legend=1 00:06:55.047 --rc geninfo_all_blocks=1 00:06:55.047 --rc geninfo_unexecuted_blocks=1 00:06:55.047 00:06:55.047 ' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.047 09:21:19 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.047 ************************************ 00:06:55.047 START TEST dd_inflate_file 00:06:55.047 ************************************ 00:06:55.047 09:21:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:55.306 [2024-10-16 09:21:19.501903] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
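dd_inflate_file, whose start is logged just above, pads dd.dump0 (already holding the 27-byte magic line written a few steps earlier) with 64 MiB of zeroes so the later bdev copies have a non-trivial payload. A standalone sketch of that step using only the flags visible in this run:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
echo 'This Is Our Magic, find it' > "$DUMP0"      # 26 characters plus newline = 27 bytes
"$SPDK_DD" --if=/dev/zero --of="$DUMP0" --oflag=append --bs=1048576 --count=64
wc -c < "$DUMP0"                                  # 64*1048576 + 27 = 67108891, as reported below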
00:06:55.306 [2024-10-16 09:21:19.502003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60955 ] 00:06:55.306 [2024-10-16 09:21:19.642850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.306 [2024-10-16 09:21:19.702571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.565 [2024-10-16 09:21:19.759636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.565  [2024-10-16T09:21:20.229Z] Copying: 64/64 [MB] (average 1488 MBps) 00:06:55.825 00:06:55.825 00:06:55.825 real 0m0.592s 00:06:55.825 user 0m0.340s 00:06:55.825 sys 0m0.311s 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:55.825 ************************************ 00:06:55.825 END TEST dd_inflate_file 00:06:55.825 ************************************ 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.825 ************************************ 00:06:55.825 START TEST dd_copy_to_out_bdev 00:06:55.825 ************************************ 00:06:55.825 09:21:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:55.825 { 00:06:55.825 "subsystems": [ 00:06:55.825 { 00:06:55.825 "subsystem": "bdev", 00:06:55.825 "config": [ 00:06:55.825 { 00:06:55.825 "params": { 00:06:55.825 "trtype": "pcie", 00:06:55.825 "traddr": "0000:00:10.0", 00:06:55.825 "name": "Nvme0" 00:06:55.825 }, 00:06:55.825 "method": "bdev_nvme_attach_controller" 00:06:55.825 }, 00:06:55.825 { 00:06:55.825 "params": { 00:06:55.825 "trtype": "pcie", 00:06:55.825 "traddr": "0000:00:11.0", 00:06:55.825 "name": "Nvme1" 00:06:55.825 }, 00:06:55.825 "method": "bdev_nvme_attach_controller" 00:06:55.825 }, 00:06:55.825 { 00:06:55.825 "method": "bdev_wait_for_examine" 00:06:55.825 } 00:06:55.825 ] 00:06:55.825 } 00:06:55.825 ] 00:06:55.825 } 00:06:55.825 [2024-10-16 09:21:20.152836] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
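dd_copy_to_out_bdev, just started above, streams that 64 MiB + magic file into the first NVMe namespace. A standalone sketch of the invocation, reusing the bdev config printed in the log; the PCI addresses 0000:00:10.0 and 0000:00:11.0 are specific to this VM, and the /tmp path below is only an illustrative place to keep the config:

cat > /tmp/nvme_bdevs.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# file -> bdev copy, so --if pairs with --ob
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --ob=Nvme0n1 --json /tmp/nvme_bdevs.json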
00:06:55.825 [2024-10-16 09:21:20.152970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60994 ] 00:06:56.084 [2024-10-16 09:21:20.286930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.084 [2024-10-16 09:21:20.335193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.084 [2024-10-16 09:21:20.389260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.462  [2024-10-16T09:21:21.866Z] Copying: 53/64 [MB] (53 MBps) [2024-10-16T09:21:22.125Z] Copying: 64/64 [MB] (average 53 MBps) 00:06:57.721 00:06:57.721 00:06:57.721 real 0m1.907s 00:06:57.721 user 0m1.671s 00:06:57.721 sys 0m1.530s 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.721 ************************************ 00:06:57.721 END TEST dd_copy_to_out_bdev 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:57.721 ************************************ 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:57.721 ************************************ 00:06:57.721 START TEST dd_offset_magic 00:06:57.721 ************************************ 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:57.721 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:57.721 [2024-10-16 09:21:22.115594] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
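dd_offset_magic then copies 65 whole 1 MiB blocks from Nvme0n1 into Nvme1n1 with --seek=16, so the magic line that sits at block 0 of the source lands at block 16 of the destination. A sketch of that seek pass, assuming the /tmp/nvme_bdevs.json file from the previous sketch:

# bdev -> bdev copy with a destination offset of 16 blocks (16 MiB at bs=1048576)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 \
  --count=65 --seek=16 --bs=1048576 --json /tmp/nvme_bdevs.json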
00:06:57.721 [2024-10-16 09:21:22.115691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:06:57.721 { 00:06:57.721 "subsystems": [ 00:06:57.721 { 00:06:57.721 "subsystem": "bdev", 00:06:57.721 "config": [ 00:06:57.721 { 00:06:57.721 "params": { 00:06:57.721 "trtype": "pcie", 00:06:57.721 "traddr": "0000:00:10.0", 00:06:57.721 "name": "Nvme0" 00:06:57.721 }, 00:06:57.721 "method": "bdev_nvme_attach_controller" 00:06:57.721 }, 00:06:57.721 { 00:06:57.721 "params": { 00:06:57.721 "trtype": "pcie", 00:06:57.721 "traddr": "0000:00:11.0", 00:06:57.721 "name": "Nvme1" 00:06:57.721 }, 00:06:57.721 "method": "bdev_nvme_attach_controller" 00:06:57.721 }, 00:06:57.721 { 00:06:57.721 "method": "bdev_wait_for_examine" 00:06:57.721 } 00:06:57.721 ] 00:06:57.721 } 00:06:57.721 ] 00:06:57.721 } 00:06:57.980 [2024-10-16 09:21:22.252823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.980 [2024-10-16 09:21:22.293416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.980 [2024-10-16 09:21:22.345521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.239  [2024-10-16T09:21:22.902Z] Copying: 65/65 [MB] (average 822 MBps) 00:06:58.498 00:06:58.498 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:58.498 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:58.498 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:58.498 09:21:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:58.498 [2024-10-16 09:21:22.890063] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:06:58.498 [2024-10-16 09:21:22.890166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61058 ] 00:06:58.498 { 00:06:58.498 "subsystems": [ 00:06:58.498 { 00:06:58.498 "subsystem": "bdev", 00:06:58.498 "config": [ 00:06:58.498 { 00:06:58.498 "params": { 00:06:58.498 "trtype": "pcie", 00:06:58.498 "traddr": "0000:00:10.0", 00:06:58.498 "name": "Nvme0" 00:06:58.498 }, 00:06:58.498 "method": "bdev_nvme_attach_controller" 00:06:58.498 }, 00:06:58.498 { 00:06:58.498 "params": { 00:06:58.498 "trtype": "pcie", 00:06:58.498 "traddr": "0000:00:11.0", 00:06:58.498 "name": "Nvme1" 00:06:58.498 }, 00:06:58.498 "method": "bdev_nvme_attach_controller" 00:06:58.498 }, 00:06:58.498 { 00:06:58.498 "method": "bdev_wait_for_examine" 00:06:58.498 } 00:06:58.498 ] 00:06:58.498 } 00:06:58.498 ] 00:06:58.498 } 00:06:58.757 [2024-10-16 09:21:23.027971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.757 [2024-10-16 09:21:23.066740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.757 [2024-10-16 09:21:23.123301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.017  [2024-10-16T09:21:23.680Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:59.276 00:06:59.276 09:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:59.276 09:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:59.276 09:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:59.276 09:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:59.276 09:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:59.276 09:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:59.276 09:21:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:59.276 [2024-10-16 09:21:23.533185] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
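The magic_check comparison above is the read-back half of offset_magic: one block is pulled from the destination at the same offset, and its first 26 characters must match the magic text; the same check is repeated for the offset-64 pass that follows. A sketch of the verification for offset 16, again assuming /tmp/nvme_bdevs.json from the earlier sketch:

DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of="$DUMP1" \
  --count=1 --skip=16 --bs=1048576 --json /tmp/nvme_bdevs.json
read -rn26 magic_check < "$DUMP1"
[[ $magic_check == 'This Is Our Magic, find it' ]] || echo 'magic not found at offset 16'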
00:06:59.276 [2024-10-16 09:21:23.533288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61070 ] 00:06:59.276 { 00:06:59.276 "subsystems": [ 00:06:59.276 { 00:06:59.276 "subsystem": "bdev", 00:06:59.276 "config": [ 00:06:59.276 { 00:06:59.276 "params": { 00:06:59.276 "trtype": "pcie", 00:06:59.276 "traddr": "0000:00:10.0", 00:06:59.276 "name": "Nvme0" 00:06:59.276 }, 00:06:59.276 "method": "bdev_nvme_attach_controller" 00:06:59.276 }, 00:06:59.276 { 00:06:59.276 "params": { 00:06:59.276 "trtype": "pcie", 00:06:59.276 "traddr": "0000:00:11.0", 00:06:59.276 "name": "Nvme1" 00:06:59.276 }, 00:06:59.276 "method": "bdev_nvme_attach_controller" 00:06:59.276 }, 00:06:59.276 { 00:06:59.276 "method": "bdev_wait_for_examine" 00:06:59.276 } 00:06:59.276 ] 00:06:59.276 } 00:06:59.276 ] 00:06:59.276 } 00:06:59.276 [2024-10-16 09:21:23.665826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.535 [2024-10-16 09:21:23.717799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.535 [2024-10-16 09:21:23.773953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.796  [2024-10-16T09:21:24.460Z] Copying: 65/65 [MB] (average 915 MBps) 00:07:00.056 00:07:00.056 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:00.056 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:00.056 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:00.056 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:00.056 [2024-10-16 09:21:24.319648] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:00.056 [2024-10-16 09:21:24.319744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61090 ] 00:07:00.056 { 00:07:00.056 "subsystems": [ 00:07:00.056 { 00:07:00.056 "subsystem": "bdev", 00:07:00.056 "config": [ 00:07:00.056 { 00:07:00.056 "params": { 00:07:00.056 "trtype": "pcie", 00:07:00.056 "traddr": "0000:00:10.0", 00:07:00.056 "name": "Nvme0" 00:07:00.056 }, 00:07:00.056 "method": "bdev_nvme_attach_controller" 00:07:00.056 }, 00:07:00.056 { 00:07:00.056 "params": { 00:07:00.056 "trtype": "pcie", 00:07:00.056 "traddr": "0000:00:11.0", 00:07:00.056 "name": "Nvme1" 00:07:00.056 }, 00:07:00.056 "method": "bdev_nvme_attach_controller" 00:07:00.056 }, 00:07:00.056 { 00:07:00.056 "method": "bdev_wait_for_examine" 00:07:00.056 } 00:07:00.056 ] 00:07:00.056 } 00:07:00.056 ] 00:07:00.056 } 00:07:00.056 [2024-10-16 09:21:24.456879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.314 [2024-10-16 09:21:24.497612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.314 [2024-10-16 09:21:24.549507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.573  [2024-10-16T09:21:24.977Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:00.573 00:07:00.573 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:00.573 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:00.574 00:07:00.574 real 0m2.860s 00:07:00.574 user 0m2.040s 00:07:00.574 sys 0m0.963s 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:00.574 ************************************ 00:07:00.574 END TEST dd_offset_magic 00:07:00.574 ************************************ 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:00.574 09:21:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.833 [2024-10-16 09:21:25.018061] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:00.833 [2024-10-16 09:21:25.018147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61127 ] 00:07:00.833 { 00:07:00.833 "subsystems": [ 00:07:00.833 { 00:07:00.833 "subsystem": "bdev", 00:07:00.833 "config": [ 00:07:00.833 { 00:07:00.833 "params": { 00:07:00.833 "trtype": "pcie", 00:07:00.833 "traddr": "0000:00:10.0", 00:07:00.833 "name": "Nvme0" 00:07:00.833 }, 00:07:00.833 "method": "bdev_nvme_attach_controller" 00:07:00.833 }, 00:07:00.833 { 00:07:00.833 "params": { 00:07:00.833 "trtype": "pcie", 00:07:00.833 "traddr": "0000:00:11.0", 00:07:00.833 "name": "Nvme1" 00:07:00.833 }, 00:07:00.833 "method": "bdev_nvme_attach_controller" 00:07:00.833 }, 00:07:00.833 { 00:07:00.833 "method": "bdev_wait_for_examine" 00:07:00.833 } 00:07:00.833 ] 00:07:00.833 } 00:07:00.833 ] 00:07:00.833 } 00:07:00.833 [2024-10-16 09:21:25.154982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.833 [2024-10-16 09:21:25.198844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.092 [2024-10-16 09:21:25.253029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.092  [2024-10-16T09:21:25.755Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:01.351 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:01.351 09:21:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:01.351 [2024-10-16 09:21:25.685181] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
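The cleanup helper traced above zero-fills the start of each namespace, five 1 MiB blocks here, which rounds the 4194330-byte test size up to whole blocks. A one-line sketch under the same assumptions as the earlier snippets:

    # Zero the first 5 MiB of Nvme0n1, as clear_nvme does above (count=5 blocks at bs=1048576).
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --count=5 --bs=1048576 --json <(printf '%s' "$conf")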
00:07:01.351 [2024-10-16 09:21:25.685280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:07:01.351 { 00:07:01.351 "subsystems": [ 00:07:01.351 { 00:07:01.351 "subsystem": "bdev", 00:07:01.351 "config": [ 00:07:01.351 { 00:07:01.351 "params": { 00:07:01.351 "trtype": "pcie", 00:07:01.351 "traddr": "0000:00:10.0", 00:07:01.351 "name": "Nvme0" 00:07:01.351 }, 00:07:01.351 "method": "bdev_nvme_attach_controller" 00:07:01.351 }, 00:07:01.351 { 00:07:01.351 "params": { 00:07:01.351 "trtype": "pcie", 00:07:01.351 "traddr": "0000:00:11.0", 00:07:01.351 "name": "Nvme1" 00:07:01.351 }, 00:07:01.351 "method": "bdev_nvme_attach_controller" 00:07:01.351 }, 00:07:01.351 { 00:07:01.351 "method": "bdev_wait_for_examine" 00:07:01.351 } 00:07:01.351 ] 00:07:01.351 } 00:07:01.351 ] 00:07:01.351 } 00:07:01.610 [2024-10-16 09:21:25.823034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.610 [2024-10-16 09:21:25.876325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.610 [2024-10-16 09:21:25.935095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.869  [2024-10-16T09:21:26.532Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:07:02.128 00:07:02.128 09:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:02.128 ************************************ 00:07:02.128 END TEST spdk_dd_bdev_to_bdev 00:07:02.128 ************************************ 00:07:02.128 00:07:02.128 real 0m7.086s 00:07:02.128 user 0m5.163s 00:07:02.128 sys 0m3.602s 00:07:02.128 09:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.128 09:21:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:02.128 09:21:26 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:02.128 09:21:26 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:02.128 09:21:26 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.128 09:21:26 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.128 09:21:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:02.128 ************************************ 00:07:02.128 START TEST spdk_dd_uring 00:07:02.128 ************************************ 00:07:02.128 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:02.128 * Looking for test storage... 
00:07:02.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:02.128 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.128 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.128 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.388 --rc genhtml_branch_coverage=1 00:07:02.388 --rc genhtml_function_coverage=1 00:07:02.388 --rc genhtml_legend=1 00:07:02.388 --rc geninfo_all_blocks=1 00:07:02.388 --rc geninfo_unexecuted_blocks=1 00:07:02.388 00:07:02.388 ' 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.388 --rc genhtml_branch_coverage=1 00:07:02.388 --rc genhtml_function_coverage=1 00:07:02.388 --rc genhtml_legend=1 00:07:02.388 --rc geninfo_all_blocks=1 00:07:02.388 --rc geninfo_unexecuted_blocks=1 00:07:02.388 00:07:02.388 ' 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.388 --rc genhtml_branch_coverage=1 00:07:02.388 --rc genhtml_function_coverage=1 00:07:02.388 --rc genhtml_legend=1 00:07:02.388 --rc geninfo_all_blocks=1 00:07:02.388 --rc geninfo_unexecuted_blocks=1 00:07:02.388 00:07:02.388 ' 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.388 --rc genhtml_branch_coverage=1 00:07:02.388 --rc genhtml_function_coverage=1 00:07:02.388 --rc genhtml_legend=1 00:07:02.388 --rc geninfo_all_blocks=1 00:07:02.388 --rc geninfo_unexecuted_blocks=1 00:07:02.388 00:07:02.388 ' 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.388 09:21:26 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:02.389 ************************************ 00:07:02.389 START TEST dd_uring_copy 00:07:02.389 ************************************ 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:02.389 
09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=kozqodgho5zyn383tqem47lk3t8e9jnft2fsaltuhv2jf33dz1i5oo7t1jcicv6h5s6sh9wzkq7wt6nkghu3j9s33a4i39rmruicv0hoo38mwgl49yppjwi4huxaardliau9rbfsrghmhxma13dvjseft3viieh7jkf8xmyj5p9ltpyvb0ppgkqcxy2t94253jzrlxnkywbu4g0e3fone1eoa5ymignbv21vw1zoiq571p8sgwjj07nzj8sp6e2rvt7abqc0qvpy91wxp9jbw9g3ltzsgzvfl1pfqatui2kglxjo5ncsmmm87323cb6w7j3wzqe4vlryar6qsweo1ilf0f6xet513nrnioe47mrrx2dgkbewojh0odzj4955kk5qf2rrfhnr66r3a8wcgqvjm4nsf94ccmrox8i50vfkep9alqfakrnh79kspim2j4w1b1s5t7dvuh19xm6bfj7m3r2ro8n7b3wzc13ovqvl8zir2stcii26vu35hbiktjnfdnwfj3j3gel8g32oeijizzvfcuqyz6tlq6ec82eu5px4i50qe7f4d86ypsqbmu3d65dc9kzp2kr1x88d3qhrxseifmy1zpfu0u3dm5wzphyxtnonrj7pxqjx1j234o0gp44uv20f25cdto3su2gx49vopyc4eudyuai4lijvjg0v8ypa5947owv3yb3p44qr3narln6ilr3aztwbez9dzhqqrd1i4v8wf9yldif8s1dl56js6jxkjk2q4nviqdjh77u49mmuenk4mc1fc60qm4j0hdihbxjdeotyyv2ql6nczjoghnlwb7rlsxbrs2g359y6p7otngiqdu9xvyf580owdmjjizkdpwu8dy2vmw2dceg47hg9tq8w2fm60cdhwo0ypw8od6vshuau65meoh7pf28ajmrukirr5ldh1hy7cyf2471noef5mt0431zjkheaf08pfsehnam91d7ueughx051y1iglc9smjoovgymr08ostff9p7rd4vm 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
kozqodgho5zyn383tqem47lk3t8e9jnft2fsaltuhv2jf33dz1i5oo7t1jcicv6h5s6sh9wzkq7wt6nkghu3j9s33a4i39rmruicv0hoo38mwgl49yppjwi4huxaardliau9rbfsrghmhxma13dvjseft3viieh7jkf8xmyj5p9ltpyvb0ppgkqcxy2t94253jzrlxnkywbu4g0e3fone1eoa5ymignbv21vw1zoiq571p8sgwjj07nzj8sp6e2rvt7abqc0qvpy91wxp9jbw9g3ltzsgzvfl1pfqatui2kglxjo5ncsmmm87323cb6w7j3wzqe4vlryar6qsweo1ilf0f6xet513nrnioe47mrrx2dgkbewojh0odzj4955kk5qf2rrfhnr66r3a8wcgqvjm4nsf94ccmrox8i50vfkep9alqfakrnh79kspim2j4w1b1s5t7dvuh19xm6bfj7m3r2ro8n7b3wzc13ovqvl8zir2stcii26vu35hbiktjnfdnwfj3j3gel8g32oeijizzvfcuqyz6tlq6ec82eu5px4i50qe7f4d86ypsqbmu3d65dc9kzp2kr1x88d3qhrxseifmy1zpfu0u3dm5wzphyxtnonrj7pxqjx1j234o0gp44uv20f25cdto3su2gx49vopyc4eudyuai4lijvjg0v8ypa5947owv3yb3p44qr3narln6ilr3aztwbez9dzhqqrd1i4v8wf9yldif8s1dl56js6jxkjk2q4nviqdjh77u49mmuenk4mc1fc60qm4j0hdihbxjdeotyyv2ql6nczjoghnlwb7rlsxbrs2g359y6p7otngiqdu9xvyf580owdmjjizkdpwu8dy2vmw2dceg47hg9tq8w2fm60cdhwo0ypw8od6vshuau65meoh7pf28ajmrukirr5ldh1hy7cyf2471noef5mt0431zjkheaf08pfsehnam91d7ueughx051y1iglc9smjoovgymr08ostff9p7rd4vm 00:07:02.389 09:21:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:02.389 [2024-10-16 09:21:26.656916] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:07:02.389 [2024-10-16 09:21:26.657184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61217 ] 00:07:02.648 [2024-10-16 09:21:26.793199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.648 [2024-10-16 09:21:26.835097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.648 [2024-10-16 09:21:26.887741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.215  [2024-10-16T09:21:28.187Z] Copying: 511/511 [MB] (average 1034 MBps) 00:07:03.783 00:07:03.783 09:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:03.783 09:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:03.783 09:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:03.783 09:21:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.783 [2024-10-16 09:21:28.020331] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
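The dd_uring_copy setup above hot-adds a zram device, sizes it to 512 MiB, and exposes it as the uring0 bdev; it then seeds magic.dump0 with a 1024-character magic string and pads it with zeros via --oflag=append to reach a full 512 MiB. A sketch of the zram part follows; the dev_id variable is illustrative, and the disksize attribute is an assumption about where the "echo 512M" seen in the trace is directed.

    # Allocate and size a zram device; its node then backs the "uring0" bdev via bdev_uring_create.
    dev_id=$(cat /sys/class/zram-control/hot_add)       # prints the new device number, 1 in this run
    echo 512M > "/sys/block/zram${dev_id}/disksize"     # assumed target of the 'echo 512M' above
    ls "/dev/zram${dev_id}"                             # device node consumed by bdev_uring_create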
00:07:03.783 [2024-10-16 09:21:28.020664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:07:03.783 { 00:07:03.783 "subsystems": [ 00:07:03.783 { 00:07:03.783 "subsystem": "bdev", 00:07:03.783 "config": [ 00:07:03.783 { 00:07:03.783 "params": { 00:07:03.783 "block_size": 512, 00:07:03.783 "num_blocks": 1048576, 00:07:03.783 "name": "malloc0" 00:07:03.783 }, 00:07:03.783 "method": "bdev_malloc_create" 00:07:03.783 }, 00:07:03.783 { 00:07:03.783 "params": { 00:07:03.783 "filename": "/dev/zram1", 00:07:03.783 "name": "uring0" 00:07:03.783 }, 00:07:03.783 "method": "bdev_uring_create" 00:07:03.783 }, 00:07:03.783 { 00:07:03.783 "method": "bdev_wait_for_examine" 00:07:03.783 } 00:07:03.783 ] 00:07:03.783 } 00:07:03.783 ] 00:07:03.783 } 00:07:03.783 [2024-10-16 09:21:28.154899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.042 [2024-10-16 09:21:28.207915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.042 [2024-10-16 09:21:28.262910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.419  [2024-10-16T09:21:30.769Z] Copying: 250/512 [MB] (250 MBps) [2024-10-16T09:21:30.769Z] Copying: 500/512 [MB] (249 MBps) [2024-10-16T09:21:31.028Z] Copying: 512/512 [MB] (average 250 MBps) 00:07:06.624 00:07:06.624 09:21:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:06.624 09:21:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:06.624 09:21:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:06.624 09:21:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.624 [2024-10-16 09:21:30.920814] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:06.624 [2024-10-16 09:21:30.920910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61277 ] 00:07:06.624 { 00:07:06.625 "subsystems": [ 00:07:06.625 { 00:07:06.625 "subsystem": "bdev", 00:07:06.625 "config": [ 00:07:06.625 { 00:07:06.625 "params": { 00:07:06.625 "block_size": 512, 00:07:06.625 "num_blocks": 1048576, 00:07:06.625 "name": "malloc0" 00:07:06.625 }, 00:07:06.625 "method": "bdev_malloc_create" 00:07:06.625 }, 00:07:06.625 { 00:07:06.625 "params": { 00:07:06.625 "filename": "/dev/zram1", 00:07:06.625 "name": "uring0" 00:07:06.625 }, 00:07:06.625 "method": "bdev_uring_create" 00:07:06.625 }, 00:07:06.625 { 00:07:06.625 "method": "bdev_wait_for_examine" 00:07:06.625 } 00:07:06.625 ] 00:07:06.625 } 00:07:06.625 ] 00:07:06.625 } 00:07:06.884 [2024-10-16 09:21:31.055629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.884 [2024-10-16 09:21:31.096140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.884 [2024-10-16 09:21:31.153068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.261  [2024-10-16T09:21:33.601Z] Copying: 194/512 [MB] (194 MBps) [2024-10-16T09:21:34.537Z] Copying: 378/512 [MB] (184 MBps) [2024-10-16T09:21:34.797Z] Copying: 512/512 [MB] (average 181 MBps) 00:07:10.393 00:07:10.393 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:10.393 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ kozqodgho5zyn383tqem47lk3t8e9jnft2fsaltuhv2jf33dz1i5oo7t1jcicv6h5s6sh9wzkq7wt6nkghu3j9s33a4i39rmruicv0hoo38mwgl49yppjwi4huxaardliau9rbfsrghmhxma13dvjseft3viieh7jkf8xmyj5p9ltpyvb0ppgkqcxy2t94253jzrlxnkywbu4g0e3fone1eoa5ymignbv21vw1zoiq571p8sgwjj07nzj8sp6e2rvt7abqc0qvpy91wxp9jbw9g3ltzsgzvfl1pfqatui2kglxjo5ncsmmm87323cb6w7j3wzqe4vlryar6qsweo1ilf0f6xet513nrnioe47mrrx2dgkbewojh0odzj4955kk5qf2rrfhnr66r3a8wcgqvjm4nsf94ccmrox8i50vfkep9alqfakrnh79kspim2j4w1b1s5t7dvuh19xm6bfj7m3r2ro8n7b3wzc13ovqvl8zir2stcii26vu35hbiktjnfdnwfj3j3gel8g32oeijizzvfcuqyz6tlq6ec82eu5px4i50qe7f4d86ypsqbmu3d65dc9kzp2kr1x88d3qhrxseifmy1zpfu0u3dm5wzphyxtnonrj7pxqjx1j234o0gp44uv20f25cdto3su2gx49vopyc4eudyuai4lijvjg0v8ypa5947owv3yb3p44qr3narln6ilr3aztwbez9dzhqqrd1i4v8wf9yldif8s1dl56js6jxkjk2q4nviqdjh77u49mmuenk4mc1fc60qm4j0hdihbxjdeotyyv2ql6nczjoghnlwb7rlsxbrs2g359y6p7otngiqdu9xvyf580owdmjjizkdpwu8dy2vmw2dceg47hg9tq8w2fm60cdhwo0ypw8od6vshuau65meoh7pf28ajmrukirr5ldh1hy7cyf2471noef5mt0431zjkheaf08pfsehnam91d7ueughx051y1iglc9smjoovgymr08ostff9p7rd4vm == 
\k\o\z\q\o\d\g\h\o\5\z\y\n\3\8\3\t\q\e\m\4\7\l\k\3\t\8\e\9\j\n\f\t\2\f\s\a\l\t\u\h\v\2\j\f\3\3\d\z\1\i\5\o\o\7\t\1\j\c\i\c\v\6\h\5\s\6\s\h\9\w\z\k\q\7\w\t\6\n\k\g\h\u\3\j\9\s\3\3\a\4\i\3\9\r\m\r\u\i\c\v\0\h\o\o\3\8\m\w\g\l\4\9\y\p\p\j\w\i\4\h\u\x\a\a\r\d\l\i\a\u\9\r\b\f\s\r\g\h\m\h\x\m\a\1\3\d\v\j\s\e\f\t\3\v\i\i\e\h\7\j\k\f\8\x\m\y\j\5\p\9\l\t\p\y\v\b\0\p\p\g\k\q\c\x\y\2\t\9\4\2\5\3\j\z\r\l\x\n\k\y\w\b\u\4\g\0\e\3\f\o\n\e\1\e\o\a\5\y\m\i\g\n\b\v\2\1\v\w\1\z\o\i\q\5\7\1\p\8\s\g\w\j\j\0\7\n\z\j\8\s\p\6\e\2\r\v\t\7\a\b\q\c\0\q\v\p\y\9\1\w\x\p\9\j\b\w\9\g\3\l\t\z\s\g\z\v\f\l\1\p\f\q\a\t\u\i\2\k\g\l\x\j\o\5\n\c\s\m\m\m\8\7\3\2\3\c\b\6\w\7\j\3\w\z\q\e\4\v\l\r\y\a\r\6\q\s\w\e\o\1\i\l\f\0\f\6\x\e\t\5\1\3\n\r\n\i\o\e\4\7\m\r\r\x\2\d\g\k\b\e\w\o\j\h\0\o\d\z\j\4\9\5\5\k\k\5\q\f\2\r\r\f\h\n\r\6\6\r\3\a\8\w\c\g\q\v\j\m\4\n\s\f\9\4\c\c\m\r\o\x\8\i\5\0\v\f\k\e\p\9\a\l\q\f\a\k\r\n\h\7\9\k\s\p\i\m\2\j\4\w\1\b\1\s\5\t\7\d\v\u\h\1\9\x\m\6\b\f\j\7\m\3\r\2\r\o\8\n\7\b\3\w\z\c\1\3\o\v\q\v\l\8\z\i\r\2\s\t\c\i\i\2\6\v\u\3\5\h\b\i\k\t\j\n\f\d\n\w\f\j\3\j\3\g\e\l\8\g\3\2\o\e\i\j\i\z\z\v\f\c\u\q\y\z\6\t\l\q\6\e\c\8\2\e\u\5\p\x\4\i\5\0\q\e\7\f\4\d\8\6\y\p\s\q\b\m\u\3\d\6\5\d\c\9\k\z\p\2\k\r\1\x\8\8\d\3\q\h\r\x\s\e\i\f\m\y\1\z\p\f\u\0\u\3\d\m\5\w\z\p\h\y\x\t\n\o\n\r\j\7\p\x\q\j\x\1\j\2\3\4\o\0\g\p\4\4\u\v\2\0\f\2\5\c\d\t\o\3\s\u\2\g\x\4\9\v\o\p\y\c\4\e\u\d\y\u\a\i\4\l\i\j\v\j\g\0\v\8\y\p\a\5\9\4\7\o\w\v\3\y\b\3\p\4\4\q\r\3\n\a\r\l\n\6\i\l\r\3\a\z\t\w\b\e\z\9\d\z\h\q\q\r\d\1\i\4\v\8\w\f\9\y\l\d\i\f\8\s\1\d\l\5\6\j\s\6\j\x\k\j\k\2\q\4\n\v\i\q\d\j\h\7\7\u\4\9\m\m\u\e\n\k\4\m\c\1\f\c\6\0\q\m\4\j\0\h\d\i\h\b\x\j\d\e\o\t\y\y\v\2\q\l\6\n\c\z\j\o\g\h\n\l\w\b\7\r\l\s\x\b\r\s\2\g\3\5\9\y\6\p\7\o\t\n\g\i\q\d\u\9\x\v\y\f\5\8\0\o\w\d\m\j\j\i\z\k\d\p\w\u\8\d\y\2\v\m\w\2\d\c\e\g\4\7\h\g\9\t\q\8\w\2\f\m\6\0\c\d\h\w\o\0\y\p\w\8\o\d\6\v\s\h\u\a\u\6\5\m\e\o\h\7\p\f\2\8\a\j\m\r\u\k\i\r\r\5\l\d\h\1\h\y\7\c\y\f\2\4\7\1\n\o\e\f\5\m\t\0\4\3\1\z\j\k\h\e\a\f\0\8\p\f\s\e\h\n\a\m\9\1\d\7\u\e\u\g\h\x\0\5\1\y\1\i\g\l\c\9\s\m\j\o\o\v\g\y\m\r\0\8\o\s\t\f\f\9\p\7\r\d\4\v\m ]] 00:07:10.393 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:10.394 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ kozqodgho5zyn383tqem47lk3t8e9jnft2fsaltuhv2jf33dz1i5oo7t1jcicv6h5s6sh9wzkq7wt6nkghu3j9s33a4i39rmruicv0hoo38mwgl49yppjwi4huxaardliau9rbfsrghmhxma13dvjseft3viieh7jkf8xmyj5p9ltpyvb0ppgkqcxy2t94253jzrlxnkywbu4g0e3fone1eoa5ymignbv21vw1zoiq571p8sgwjj07nzj8sp6e2rvt7abqc0qvpy91wxp9jbw9g3ltzsgzvfl1pfqatui2kglxjo5ncsmmm87323cb6w7j3wzqe4vlryar6qsweo1ilf0f6xet513nrnioe47mrrx2dgkbewojh0odzj4955kk5qf2rrfhnr66r3a8wcgqvjm4nsf94ccmrox8i50vfkep9alqfakrnh79kspim2j4w1b1s5t7dvuh19xm6bfj7m3r2ro8n7b3wzc13ovqvl8zir2stcii26vu35hbiktjnfdnwfj3j3gel8g32oeijizzvfcuqyz6tlq6ec82eu5px4i50qe7f4d86ypsqbmu3d65dc9kzp2kr1x88d3qhrxseifmy1zpfu0u3dm5wzphyxtnonrj7pxqjx1j234o0gp44uv20f25cdto3su2gx49vopyc4eudyuai4lijvjg0v8ypa5947owv3yb3p44qr3narln6ilr3aztwbez9dzhqqrd1i4v8wf9yldif8s1dl56js6jxkjk2q4nviqdjh77u49mmuenk4mc1fc60qm4j0hdihbxjdeotyyv2ql6nczjoghnlwb7rlsxbrs2g359y6p7otngiqdu9xvyf580owdmjjizkdpwu8dy2vmw2dceg47hg9tq8w2fm60cdhwo0ypw8od6vshuau65meoh7pf28ajmrukirr5ldh1hy7cyf2471noef5mt0431zjkheaf08pfsehnam91d7ueughx051y1iglc9smjoovgymr08ostff9p7rd4vm == 
\k\o\z\q\o\d\g\h\o\5\z\y\n\3\8\3\t\q\e\m\4\7\l\k\3\t\8\e\9\j\n\f\t\2\f\s\a\l\t\u\h\v\2\j\f\3\3\d\z\1\i\5\o\o\7\t\1\j\c\i\c\v\6\h\5\s\6\s\h\9\w\z\k\q\7\w\t\6\n\k\g\h\u\3\j\9\s\3\3\a\4\i\3\9\r\m\r\u\i\c\v\0\h\o\o\3\8\m\w\g\l\4\9\y\p\p\j\w\i\4\h\u\x\a\a\r\d\l\i\a\u\9\r\b\f\s\r\g\h\m\h\x\m\a\1\3\d\v\j\s\e\f\t\3\v\i\i\e\h\7\j\k\f\8\x\m\y\j\5\p\9\l\t\p\y\v\b\0\p\p\g\k\q\c\x\y\2\t\9\4\2\5\3\j\z\r\l\x\n\k\y\w\b\u\4\g\0\e\3\f\o\n\e\1\e\o\a\5\y\m\i\g\n\b\v\2\1\v\w\1\z\o\i\q\5\7\1\p\8\s\g\w\j\j\0\7\n\z\j\8\s\p\6\e\2\r\v\t\7\a\b\q\c\0\q\v\p\y\9\1\w\x\p\9\j\b\w\9\g\3\l\t\z\s\g\z\v\f\l\1\p\f\q\a\t\u\i\2\k\g\l\x\j\o\5\n\c\s\m\m\m\8\7\3\2\3\c\b\6\w\7\j\3\w\z\q\e\4\v\l\r\y\a\r\6\q\s\w\e\o\1\i\l\f\0\f\6\x\e\t\5\1\3\n\r\n\i\o\e\4\7\m\r\r\x\2\d\g\k\b\e\w\o\j\h\0\o\d\z\j\4\9\5\5\k\k\5\q\f\2\r\r\f\h\n\r\6\6\r\3\a\8\w\c\g\q\v\j\m\4\n\s\f\9\4\c\c\m\r\o\x\8\i\5\0\v\f\k\e\p\9\a\l\q\f\a\k\r\n\h\7\9\k\s\p\i\m\2\j\4\w\1\b\1\s\5\t\7\d\v\u\h\1\9\x\m\6\b\f\j\7\m\3\r\2\r\o\8\n\7\b\3\w\z\c\1\3\o\v\q\v\l\8\z\i\r\2\s\t\c\i\i\2\6\v\u\3\5\h\b\i\k\t\j\n\f\d\n\w\f\j\3\j\3\g\e\l\8\g\3\2\o\e\i\j\i\z\z\v\f\c\u\q\y\z\6\t\l\q\6\e\c\8\2\e\u\5\p\x\4\i\5\0\q\e\7\f\4\d\8\6\y\p\s\q\b\m\u\3\d\6\5\d\c\9\k\z\p\2\k\r\1\x\8\8\d\3\q\h\r\x\s\e\i\f\m\y\1\z\p\f\u\0\u\3\d\m\5\w\z\p\h\y\x\t\n\o\n\r\j\7\p\x\q\j\x\1\j\2\3\4\o\0\g\p\4\4\u\v\2\0\f\2\5\c\d\t\o\3\s\u\2\g\x\4\9\v\o\p\y\c\4\e\u\d\y\u\a\i\4\l\i\j\v\j\g\0\v\8\y\p\a\5\9\4\7\o\w\v\3\y\b\3\p\4\4\q\r\3\n\a\r\l\n\6\i\l\r\3\a\z\t\w\b\e\z\9\d\z\h\q\q\r\d\1\i\4\v\8\w\f\9\y\l\d\i\f\8\s\1\d\l\5\6\j\s\6\j\x\k\j\k\2\q\4\n\v\i\q\d\j\h\7\7\u\4\9\m\m\u\e\n\k\4\m\c\1\f\c\6\0\q\m\4\j\0\h\d\i\h\b\x\j\d\e\o\t\y\y\v\2\q\l\6\n\c\z\j\o\g\h\n\l\w\b\7\r\l\s\x\b\r\s\2\g\3\5\9\y\6\p\7\o\t\n\g\i\q\d\u\9\x\v\y\f\5\8\0\o\w\d\m\j\j\i\z\k\d\p\w\u\8\d\y\2\v\m\w\2\d\c\e\g\4\7\h\g\9\t\q\8\w\2\f\m\6\0\c\d\h\w\o\0\y\p\w\8\o\d\6\v\s\h\u\a\u\6\5\m\e\o\h\7\p\f\2\8\a\j\m\r\u\k\i\r\r\5\l\d\h\1\h\y\7\c\y\f\2\4\7\1\n\o\e\f\5\m\t\0\4\3\1\z\j\k\h\e\a\f\0\8\p\f\s\e\h\n\a\m\9\1\d\7\u\e\u\g\h\x\0\5\1\y\1\i\g\l\c\9\s\m\j\o\o\v\g\y\m\r\0\8\o\s\t\f\f\9\p\7\r\d\4\v\m ]] 00:07:10.394 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:10.653 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:10.653 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:10.653 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:10.653 09:21:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:10.653 [2024-10-16 09:21:34.970308] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
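Taken together, the last few invocations form the round trip dd_uring_copy is really checking: push the 512 MiB magic file into the zram-backed uring bdev, pull it back into a second file, confirm the 1024-byte magic survived, and byte-compare the two files. Condensed into a sketch: uring_conf is assumed to hold a config with bdev_malloc_create and bdev_uring_create like the one dumped above, magic the 1024-character string generated earlier, and the dump paths are shortened to basenames.

    # Round-trip check through the uring bdev (flags follow the trace).
    "$SPDK_DD" --if=magic.dump0 --ob=uring0 --json <(printf '%s' "$uring_conf")   # file -> bdev
    "$SPDK_DD" --ib=uring0 --of=magic.dump1 --json <(printf '%s' "$uring_conf")   # bdev -> file
    read -rn1024 verify_magic < magic.dump1      # first 1024 bytes must still be the magic string
    [[ $verify_magic == "$magic" ]] && diff -q magic.dump0 magic.dump1            # both must match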
00:07:10.653 [2024-10-16 09:21:34.970379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:07:10.653 { 00:07:10.653 "subsystems": [ 00:07:10.653 { 00:07:10.653 "subsystem": "bdev", 00:07:10.653 "config": [ 00:07:10.653 { 00:07:10.653 "params": { 00:07:10.653 "block_size": 512, 00:07:10.653 "num_blocks": 1048576, 00:07:10.653 "name": "malloc0" 00:07:10.653 }, 00:07:10.653 "method": "bdev_malloc_create" 00:07:10.653 }, 00:07:10.653 { 00:07:10.653 "params": { 00:07:10.653 "filename": "/dev/zram1", 00:07:10.653 "name": "uring0" 00:07:10.653 }, 00:07:10.653 "method": "bdev_uring_create" 00:07:10.653 }, 00:07:10.653 { 00:07:10.653 "method": "bdev_wait_for_examine" 00:07:10.653 } 00:07:10.653 ] 00:07:10.653 } 00:07:10.653 ] 00:07:10.653 } 00:07:10.912 [2024-10-16 09:21:35.101809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.912 [2024-10-16 09:21:35.155134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.912 [2024-10-16 09:21:35.211805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.289  [2024-10-16T09:21:37.666Z] Copying: 155/512 [MB] (155 MBps) [2024-10-16T09:21:38.602Z] Copying: 292/512 [MB] (137 MBps) [2024-10-16T09:21:39.170Z] Copying: 431/512 [MB] (138 MBps) [2024-10-16T09:21:39.428Z] Copying: 512/512 [MB] (average 142 MBps) 00:07:15.024 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:15.024 09:21:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.283 [2024-10-16 09:21:39.435409] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
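The next case exercises the error path: the config handed to spdk_dd creates uring0 and then immediately deletes it with bdev_uring_delete, so the copy that follows is expected to fail. The interleaved JSON above, consolidated for readability as a bash variable (the variable name is illustrative; the structure is exactly what the trace dumps):

    # Config used by the negative test: create uring0, then delete it before the copy runs.
    delete_conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
            { "method": "bdev_uring_create", "params": { "name": "uring0", "filename": "/dev/zram1" } },
            { "method": "bdev_uring_delete", "params": { "name": "uring0" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'

A copy that names uring0 against this config is then expected to fail with "No such device", which is the spdk_dd.c error chain logged below before the shell converts the non-zero exit into the expected test result.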
00:07:15.283 [2024-10-16 09:21:39.436187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61409 ] 00:07:15.283 { 00:07:15.283 "subsystems": [ 00:07:15.283 { 00:07:15.283 "subsystem": "bdev", 00:07:15.283 "config": [ 00:07:15.283 { 00:07:15.283 "params": { 00:07:15.283 "block_size": 512, 00:07:15.283 "num_blocks": 1048576, 00:07:15.283 "name": "malloc0" 00:07:15.283 }, 00:07:15.283 "method": "bdev_malloc_create" 00:07:15.283 }, 00:07:15.283 { 00:07:15.283 "params": { 00:07:15.283 "filename": "/dev/zram1", 00:07:15.283 "name": "uring0" 00:07:15.283 }, 00:07:15.283 "method": "bdev_uring_create" 00:07:15.283 }, 00:07:15.283 { 00:07:15.283 "params": { 00:07:15.283 "name": "uring0" 00:07:15.283 }, 00:07:15.283 "method": "bdev_uring_delete" 00:07:15.283 }, 00:07:15.283 { 00:07:15.283 "method": "bdev_wait_for_examine" 00:07:15.283 } 00:07:15.283 ] 00:07:15.283 } 00:07:15.283 ] 00:07:15.283 } 00:07:15.283 [2024-10-16 09:21:39.575076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.283 [2024-10-16 09:21:39.630916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.283 [2024-10-16 09:21:39.686150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.542  [2024-10-16T09:21:40.515Z] Copying: 0/0 [B] (average 0 Bps) 00:07:16.111 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:16.111 09:21:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:16.111 09:21:40 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:16.111 [2024-10-16 09:21:40.329444] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:07:16.111 [2024-10-16 09:21:40.329558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61438 ] 00:07:16.111 { 00:07:16.111 "subsystems": [ 00:07:16.111 { 00:07:16.111 "subsystem": "bdev", 00:07:16.111 "config": [ 00:07:16.111 { 00:07:16.111 "params": { 00:07:16.111 "block_size": 512, 00:07:16.111 "num_blocks": 1048576, 00:07:16.111 "name": "malloc0" 00:07:16.111 }, 00:07:16.111 "method": "bdev_malloc_create" 00:07:16.111 }, 00:07:16.111 { 00:07:16.111 "params": { 00:07:16.111 "filename": "/dev/zram1", 00:07:16.111 "name": "uring0" 00:07:16.111 }, 00:07:16.111 "method": "bdev_uring_create" 00:07:16.111 }, 00:07:16.111 { 00:07:16.111 "params": { 00:07:16.111 "name": "uring0" 00:07:16.111 }, 00:07:16.111 "method": "bdev_uring_delete" 00:07:16.111 }, 00:07:16.111 { 00:07:16.111 "method": "bdev_wait_for_examine" 00:07:16.111 } 00:07:16.111 ] 00:07:16.111 } 00:07:16.111 ] 00:07:16.111 } 00:07:16.111 [2024-10-16 09:21:40.470097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.370 [2024-10-16 09:21:40.522265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.370 [2024-10-16 09:21:40.577241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.630 [2024-10-16 09:21:40.780890] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:16.630 [2024-10-16 09:21:40.780946] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:16.630 [2024-10-16 09:21:40.780958] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:16.630 [2024-10-16 09:21:40.780968] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.888 [2024-10-16 09:21:41.097227] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:16.888 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:16.888 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.888 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:16.889 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:17.147 00:07:17.147 real 0m14.856s 00:07:17.147 user 0m10.065s 00:07:17.147 sys 0m12.190s 00:07:17.147 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.147 09:21:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.147 ************************************ 00:07:17.147 END TEST dd_uring_copy 00:07:17.147 ************************************ 00:07:17.147 00:07:17.147 real 0m15.098s 00:07:17.147 user 0m10.203s 00:07:17.147 sys 0m12.298s 00:07:17.147 09:21:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.147 09:21:41 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:17.147 ************************************ 00:07:17.147 END TEST spdk_dd_uring 00:07:17.147 ************************************ 00:07:17.147 09:21:41 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:17.147 09:21:41 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.147 09:21:41 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.147 09:21:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:17.147 ************************************ 00:07:17.147 START TEST spdk_dd_sparse 00:07:17.148 ************************************ 00:07:17.148 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:17.407 * Looking for test storage... 00:07:17.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.407 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:17.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.408 --rc genhtml_branch_coverage=1 00:07:17.408 --rc genhtml_function_coverage=1 00:07:17.408 --rc genhtml_legend=1 00:07:17.408 --rc geninfo_all_blocks=1 00:07:17.408 --rc geninfo_unexecuted_blocks=1 00:07:17.408 00:07:17.408 ' 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:17.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.408 --rc genhtml_branch_coverage=1 00:07:17.408 --rc genhtml_function_coverage=1 00:07:17.408 --rc genhtml_legend=1 00:07:17.408 --rc geninfo_all_blocks=1 00:07:17.408 --rc geninfo_unexecuted_blocks=1 00:07:17.408 00:07:17.408 ' 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:17.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.408 --rc genhtml_branch_coverage=1 00:07:17.408 --rc genhtml_function_coverage=1 00:07:17.408 --rc genhtml_legend=1 00:07:17.408 --rc geninfo_all_blocks=1 00:07:17.408 --rc geninfo_unexecuted_blocks=1 00:07:17.408 00:07:17.408 ' 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:17.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.408 --rc genhtml_branch_coverage=1 00:07:17.408 --rc genhtml_function_coverage=1 00:07:17.408 --rc genhtml_legend=1 00:07:17.408 --rc geninfo_all_blocks=1 00:07:17.408 --rc geninfo_unexecuted_blocks=1 00:07:17.408 00:07:17.408 ' 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.408 09:21:41 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:17.408 1+0 records in 00:07:17.408 1+0 records out 00:07:17.408 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00768898 s, 545 MB/s 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:17.408 1+0 records in 00:07:17.408 1+0 records out 00:07:17.408 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00879812 s, 477 MB/s 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:17.408 1+0 records in 00:07:17.408 1+0 records out 00:07:17.408 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00473272 s, 886 MB/s 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:17.408 ************************************ 00:07:17.408 START TEST dd_sparse_file_to_file 00:07:17.408 ************************************ 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:17.408 09:21:41 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:17.668 [2024-10-16 09:21:41.820375] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
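The prepare step above builds the sparse input: a 100 MiB backing file for the aio bdev, plus file_zero1 written as three separate 4 MiB extents at offsets 0, 16 MiB and 32 MiB, leaving holes in between. The equivalent commands as a sketch, using only coreutils exactly as the trace does; the stat comparison of apparent size against allocated blocks is what the test asserts a little further down.

    # Recreate the sparse layout used by the dd_sparse_* tests above.
    truncate --size 104857600 dd_sparse_aio_disk                   # 100 MiB backing file for the aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1                    # 4 MiB extent at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4             # 4 MiB extent at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8             # 4 MiB extent at 32 MiB
    stat --printf='apparent=%s allocated_blocks=%b\n' file_zero1   # sparse: %b (24576) is far below %s/512

The apparent size works out to 36 MiB (37748736 bytes) while only 12 MiB is allocated, matching the 37748736 / 24576 values the file_to_file check reports below.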
00:07:17.668 [2024-10-16 09:21:41.820849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61537 ] 00:07:17.668 { 00:07:17.668 "subsystems": [ 00:07:17.668 { 00:07:17.668 "subsystem": "bdev", 00:07:17.668 "config": [ 00:07:17.668 { 00:07:17.668 "params": { 00:07:17.668 "block_size": 4096, 00:07:17.668 "filename": "dd_sparse_aio_disk", 00:07:17.668 "name": "dd_aio" 00:07:17.668 }, 00:07:17.668 "method": "bdev_aio_create" 00:07:17.668 }, 00:07:17.668 { 00:07:17.668 "params": { 00:07:17.668 "lvs_name": "dd_lvstore", 00:07:17.668 "bdev_name": "dd_aio" 00:07:17.668 }, 00:07:17.668 "method": "bdev_lvol_create_lvstore" 00:07:17.668 }, 00:07:17.668 { 00:07:17.668 "method": "bdev_wait_for_examine" 00:07:17.668 } 00:07:17.668 ] 00:07:17.668 } 00:07:17.668 ] 00:07:17.668 } 00:07:17.668 [2024-10-16 09:21:41.959859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.668 [2024-10-16 09:21:42.018837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.927 [2024-10-16 09:21:42.078453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.927  [2024-10-16T09:21:42.590Z] Copying: 12/36 [MB] (average 857 MBps) 00:07:18.186 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:18.186 00:07:18.186 real 0m0.663s 00:07:18.186 user 0m0.392s 00:07:18.186 sys 0m0.372s 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:18.186 ************************************ 00:07:18.186 END TEST dd_sparse_file_to_file 00:07:18.186 ************************************ 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:18.186 ************************************ 00:07:18.186 START TEST dd_sparse_file_to_bdev 
00:07:18.186 ************************************ 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:18.186 09:21:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:18.186 [2024-10-16 09:21:42.536061] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:07:18.186 [2024-10-16 09:21:42.536161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61580 ] 00:07:18.186 { 00:07:18.186 "subsystems": [ 00:07:18.186 { 00:07:18.186 "subsystem": "bdev", 00:07:18.186 "config": [ 00:07:18.186 { 00:07:18.186 "params": { 00:07:18.186 "block_size": 4096, 00:07:18.186 "filename": "dd_sparse_aio_disk", 00:07:18.186 "name": "dd_aio" 00:07:18.186 }, 00:07:18.186 "method": "bdev_aio_create" 00:07:18.186 }, 00:07:18.186 { 00:07:18.186 "params": { 00:07:18.186 "lvs_name": "dd_lvstore", 00:07:18.186 "lvol_name": "dd_lvol", 00:07:18.186 "size_in_mib": 36, 00:07:18.186 "thin_provision": true 00:07:18.186 }, 00:07:18.186 "method": "bdev_lvol_create" 00:07:18.186 }, 00:07:18.186 { 00:07:18.186 "method": "bdev_wait_for_examine" 00:07:18.186 } 00:07:18.186 ] 00:07:18.186 } 00:07:18.186 ] 00:07:18.186 } 00:07:18.445 [2024-10-16 09:21:42.676990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.445 [2024-10-16 09:21:42.735994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.445 [2024-10-16 09:21:42.794236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.704  [2024-10-16T09:21:43.108Z] Copying: 12/36 [MB] (average 500 MBps) 00:07:18.704 00:07:18.704 00:07:18.704 real 0m0.621s 00:07:18.704 user 0m0.398s 00:07:18.704 sys 0m0.335s 00:07:18.704 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.704 ************************************ 00:07:18.704 END TEST dd_sparse_file_to_bdev 00:07:18.704 ************************************ 00:07:18.704 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:18.962 ************************************ 00:07:18.962 START TEST dd_sparse_bdev_to_file 00:07:18.962 ************************************ 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:18.962 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:18.963 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:18.963 { 00:07:18.963 "subsystems": [ 00:07:18.963 { 00:07:18.963 "subsystem": "bdev", 00:07:18.963 "config": [ 00:07:18.963 { 00:07:18.963 "params": { 00:07:18.963 "block_size": 4096, 00:07:18.963 "filename": "dd_sparse_aio_disk", 00:07:18.963 "name": "dd_aio" 00:07:18.963 }, 00:07:18.963 "method": "bdev_aio_create" 00:07:18.963 }, 00:07:18.963 { 00:07:18.963 "method": "bdev_wait_for_examine" 00:07:18.963 } 00:07:18.963 ] 00:07:18.963 } 00:07:18.963 ] 00:07:18.963 } 00:07:18.963 [2024-10-16 09:21:43.212701] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
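All three sparse legs (file to file, file into the thin-provisioned dd_lvol, and this lvol-to-file copy) drive spdk_dd the same way: the bdev configuration is generated as JSON and handed over on a file descriptor. A sketch of that pattern for the copy traced here, with a here-doc standing in for the test's gen_conf helper on /dev/fd/62:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse \
  --json <(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_aio_create",
   "params": {"filename": "dd_sparse_aio_disk", "name": "dd_aio", "block_size": 4096}},
  {"method": "bdev_wait_for_examine"}
]}]}
EOF
)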
00:07:18.963 [2024-10-16 09:21:43.212804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61612 ] 00:07:18.963 [2024-10-16 09:21:43.352974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.221 [2024-10-16 09:21:43.413758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.221 [2024-10-16 09:21:43.471072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.221  [2024-10-16T09:21:43.884Z] Copying: 12/36 [MB] (average 857 MBps) 00:07:19.480 00:07:19.480 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:19.480 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:19.481 ************************************ 00:07:19.481 END TEST dd_sparse_bdev_to_file 00:07:19.481 ************************************ 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:19.481 00:07:19.481 real 0m0.636s 00:07:19.481 user 0m0.388s 00:07:19.481 sys 0m0.352s 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:19.481 ************************************ 00:07:19.481 END TEST spdk_dd_sparse 00:07:19.481 ************************************ 00:07:19.481 00:07:19.481 real 0m2.333s 00:07:19.481 user 0m1.369s 00:07:19.481 sys 0m1.280s 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.481 09:21:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:19.741 09:21:43 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:19.741 09:21:43 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.741 09:21:43 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.741 09:21:43 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.741 ************************************ 00:07:19.741 START TEST spdk_dd_negative 00:07:19.741 ************************************ 00:07:19.741 09:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:19.741 * Looking for test storage... 00:07:19.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:19.741 09:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:19.741 09:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:07:19.741 09:21:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.741 --rc genhtml_branch_coverage=1 00:07:19.741 --rc genhtml_function_coverage=1 00:07:19.741 --rc genhtml_legend=1 00:07:19.741 --rc geninfo_all_blocks=1 00:07:19.741 --rc geninfo_unexecuted_blocks=1 00:07:19.741 00:07:19.741 ' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.741 --rc genhtml_branch_coverage=1 00:07:19.741 --rc genhtml_function_coverage=1 00:07:19.741 --rc genhtml_legend=1 00:07:19.741 --rc geninfo_all_blocks=1 00:07:19.741 --rc geninfo_unexecuted_blocks=1 00:07:19.741 00:07:19.741 ' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.741 --rc genhtml_branch_coverage=1 00:07:19.741 --rc genhtml_function_coverage=1 00:07:19.741 --rc genhtml_legend=1 00:07:19.741 --rc geninfo_all_blocks=1 00:07:19.741 --rc geninfo_unexecuted_blocks=1 00:07:19.741 00:07:19.741 ' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.741 --rc genhtml_branch_coverage=1 00:07:19.741 --rc genhtml_function_coverage=1 00:07:19.741 --rc genhtml_legend=1 00:07:19.741 --rc geninfo_all_blocks=1 00:07:19.741 --rc geninfo_unexecuted_blocks=1 00:07:19.741 00:07:19.741 ' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.741 ************************************ 00:07:19.741 START TEST 
dd_invalid_arguments 00:07:19.741 ************************************ 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.741 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:20.001 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:20.001 00:07:20.001 CPU options: 00:07:20.001 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:20.001 (like [0,1,10]) 00:07:20.001 --lcores lcore to CPU mapping list. The list is in the format: 00:07:20.001 [<,lcores[@CPUs]>...] 00:07:20.001 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:20.001 Within the group, '-' is used for range separator, 00:07:20.001 ',' is used for single number separator. 00:07:20.001 '( )' can be omitted for single element group, 00:07:20.001 '@' can be omitted if cpus and lcores have the same value 00:07:20.001 --disable-cpumask-locks Disable CPU core lock files. 00:07:20.001 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:20.001 pollers in the app support interrupt mode) 00:07:20.001 -p, --main-core main (primary) core for DPDK 00:07:20.001 00:07:20.001 Configuration options: 00:07:20.001 -c, --config, --json JSON config file 00:07:20.001 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:20.001 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:20.001 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:20.001 --rpcs-allowed comma-separated list of permitted RPCS 00:07:20.001 --json-ignore-init-errors don't exit on invalid config entry 00:07:20.001 00:07:20.001 Memory options: 00:07:20.001 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:20.001 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:20.001 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:20.001 -R, --huge-unlink unlink huge files after initialization 00:07:20.001 -n, --mem-channels number of memory channels used for DPDK 00:07:20.001 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:20.001 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:20.001 --no-huge run without using hugepages 00:07:20.001 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:20.001 -i, --shm-id shared memory ID (optional) 00:07:20.001 -g, --single-file-segments force creating just one hugetlbfs file 00:07:20.001 00:07:20.001 PCI options: 00:07:20.001 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:20.001 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:20.001 -u, --no-pci disable PCI access 00:07:20.001 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:20.001 00:07:20.001 Log options: 00:07:20.001 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:20.001 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:20.001 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:20.001 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:20.001 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:20.001 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:20.001 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:20.001 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:20.001 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:20.001 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:20.001 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:20.001 --silence-noticelog disable notice level logging to stderr 00:07:20.001 00:07:20.001 Trace options: 00:07:20.001 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:20.001 setting 0 to disable trace (default 32768) 00:07:20.002 Tracepoints vary in size and can use more than one trace entry. 00:07:20.002 -e, --tpoint-group [:] 00:07:20.002 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:20.002 [2024-10-16 09:21:44.172759] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:20.002 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:20.002 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:20.002 bdev_raid, scheduler, all). 00:07:20.002 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:20.002 a tracepoint group. First tpoint inside a group can be enabled by 00:07:20.002 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:20.002 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:20.002 in /include/spdk_internal/trace_defs.h 00:07:20.002 00:07:20.002 Other options: 00:07:20.002 -h, --help show this usage 00:07:20.002 -v, --version print SPDK version 00:07:20.002 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:20.002 --env-context Opaque context for use of the env implementation 00:07:20.002 00:07:20.002 Application specific: 00:07:20.002 [--------- DD Options ---------] 00:07:20.002 --if Input file. Must specify either --if or --ib. 00:07:20.002 --ib Input bdev. Must specifier either --if or --ib 00:07:20.002 --of Output file. Must specify either --of or --ob. 00:07:20.002 --ob Output bdev. Must specify either --of or --ob. 00:07:20.002 --iflag Input file flags. 00:07:20.002 --oflag Output file flags. 00:07:20.002 --bs I/O unit size (default: 4096) 00:07:20.002 --qd Queue depth (default: 2) 00:07:20.002 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:20.002 --skip Skip this many I/O units at start of input. (default: 0) 00:07:20.002 --seek Skip this many I/O units at start of output. (default: 0) 00:07:20.002 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:20.002 --sparse Enable hole skipping in input target 00:07:20.002 Available iflag and oflag values: 00:07:20.002 append - append mode 00:07:20.002 direct - use direct I/O for data 00:07:20.002 directory - fail unless a directory 00:07:20.002 dsync - use synchronized I/O for data 00:07:20.002 noatime - do not update access time 00:07:20.002 noctty - do not assign controlling terminal from file 00:07:20.002 nofollow - do not follow symlinks 00:07:20.002 nonblock - use non-blocking I/O 00:07:20.002 sync - use synchronized I/O for data and metadata 00:07:20.002 ************************************ 00:07:20.002 END TEST dd_invalid_arguments 00:07:20.002 ************************************ 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.002 00:07:20.002 real 0m0.079s 00:07:20.002 user 0m0.040s 00:07:20.002 sys 0m0.035s 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.002 ************************************ 00:07:20.002 START TEST dd_double_input 00:07:20.002 ************************************ 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:20.002 [2024-10-16 09:21:44.311895] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
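Every negative case in this suite follows the recipe visible here: spdk_dd is run through the NOT helper with one invalid flag combination and must exit non-zero (the following lines record es=22 for this one). A bare-shell equivalent of the double-input check, dropping the valid_exec_arg indirection:
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
     --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2>/dev/null; then
  echo "spdk_dd unexpectedly accepted --if together with --ib" >&2
  exit 1
fi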
00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.002 00:07:20.002 real 0m0.089s 00:07:20.002 user 0m0.052s 00:07:20.002 sys 0m0.033s 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:20.002 ************************************ 00:07:20.002 END TEST dd_double_input 00:07:20.002 ************************************ 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.002 ************************************ 00:07:20.002 START TEST dd_double_output 00:07:20.002 ************************************ 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.002 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:20.261 [2024-10-16 09:21:44.443715] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.261 ************************************ 00:07:20.261 END TEST dd_double_output 00:07:20.261 ************************************ 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.261 00:07:20.261 real 0m0.079s 00:07:20.261 user 0m0.049s 00:07:20.261 sys 0m0.028s 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.261 ************************************ 00:07:20.261 START TEST dd_no_input 00:07:20.261 ************************************ 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:20.261 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:20.262 [2024-10-16 09:21:44.575984] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.262 ************************************ 00:07:20.262 END TEST dd_no_input 00:07:20.262 ************************************ 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.262 00:07:20.262 real 0m0.078s 00:07:20.262 user 0m0.049s 00:07:20.262 sys 0m0.027s 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.262 ************************************ 00:07:20.262 START TEST dd_no_output 00:07:20.262 ************************************ 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.262 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.521 [2024-10-16 09:21:44.699730] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:20.521 09:21:44 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.521 00:07:20.521 real 0m0.071s 00:07:20.521 user 0m0.044s 00:07:20.521 sys 0m0.025s 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 ************************************ 00:07:20.521 END TEST dd_no_output 00:07:20.521 ************************************ 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 ************************************ 00:07:20.521 START TEST dd_wrong_blocksize 00:07:20.521 ************************************ 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:20.521 [2024-10-16 09:21:44.824593] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.521 00:07:20.521 real 0m0.076s 00:07:20.521 user 0m0.049s 00:07:20.521 sys 0m0.026s 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 ************************************ 00:07:20.521 END TEST dd_wrong_blocksize 00:07:20.521 ************************************ 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 ************************************ 00:07:20.521 START TEST dd_smaller_blocksize 00:07:20.521 ************************************ 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.521 
09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.521 09:21:44 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:20.781 [2024-10-16 09:21:44.946359] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:07:20.781 [2024-10-16 09:21:44.946439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61844 ] 00:07:20.781 [2024-10-16 09:21:45.086312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.781 [2024-10-16 09:21:45.142689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.039 [2024-10-16 09:21:45.202018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.298 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:21.557 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:21.557 [2024-10-16 09:21:45.826713] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:21.557 [2024-10-16 09:21:45.826774] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.557 [2024-10-16 09:21:45.948182] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:21.816 ************************************ 00:07:21.816 END TEST dd_smaller_blocksize 00:07:21.816 ************************************ 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.816 00:07:21.816 real 0m1.119s 00:07:21.816 user 0m0.402s 00:07:21.816 sys 0m0.609s 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.816 ************************************ 00:07:21.816 START TEST dd_invalid_count 00:07:21.816 ************************************ 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
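The smaller-blocksize case above exercises the allocation path rather than argument parsing: an absurdly large --bs value must fail with an allocation error instead of being silently clamped. Restated as a plain shell check against the message seen in the trace:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --bs=99999999999999 2>&1 | grep -q 'try smaller block size value' \
  || { echo 'expected allocation failure was not reported' >&2; exit 1; }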
00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:21.816 [2024-10-16 09:21:46.115752] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.816 ************************************ 00:07:21.816 END TEST dd_invalid_count 00:07:21.816 ************************************ 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.816 00:07:21.816 real 0m0.076s 00:07:21.816 user 0m0.045s 00:07:21.816 sys 0m0.030s 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.816 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.817 ************************************ 
00:07:21.817 START TEST dd_invalid_oflag 00:07:21.817 ************************************ 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.817 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:22.076 [2024-10-16 09:21:46.244621] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.076 ************************************ 00:07:22.076 END TEST dd_invalid_oflag 00:07:22.076 ************************************ 00:07:22.076 00:07:22.076 real 0m0.086s 00:07:22.076 user 0m0.058s 00:07:22.076 sys 0m0.026s 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 ************************************ 00:07:22.076 START TEST dd_invalid_iflag 00:07:22.076 
************************************ 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:22.076 [2024-10-16 09:21:46.377955] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.076 00:07:22.076 real 0m0.078s 00:07:22.076 user 0m0.051s 00:07:22.076 sys 0m0.025s 00:07:22.076 ************************************ 00:07:22.076 END TEST dd_invalid_iflag 00:07:22.076 ************************************ 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.076 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 ************************************ 00:07:22.076 START TEST dd_unknown_flag 00:07:22.076 ************************************ 00:07:22.076 
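Stripped of the xtrace plumbing, this case simply drives spdk_dd with an out-of-range output flag and requires the run to fail; the trace below ends in 'Unknown file flag: -1' and a non-zero exit, which the NOT wrapper sketched earlier turns into a pass. Roughly, with the paths taken from the trace:

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
           --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
           --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
           --oflag=-1; then
        echo 'dd_unknown_flag: spdk_dd unexpectedly accepted --oflag=-1' >&2
        exit 1
    fi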
09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.077 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:22.336 [2024-10-16 09:21:46.493244] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:22.336 [2024-10-16 09:21:46.493473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61942 ] 00:07:22.336 [2024-10-16 09:21:46.622111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.336 [2024-10-16 09:21:46.663164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.336 [2024-10-16 09:21:46.717705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.595 [2024-10-16 09:21:46.754785] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:22.595 [2024-10-16 09:21:46.754838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.595 [2024-10-16 09:21:46.754926] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:22.595 [2024-10-16 09:21:46.754938] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.595 [2024-10-16 09:21:46.755187] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:22.595 [2024-10-16 09:21:46.755219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.595 [2024-10-16 09:21:46.755294] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:22.595 [2024-10-16 09:21:46.755304] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:22.595 [2024-10-16 09:21:46.872633] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.595 00:07:22.595 real 0m0.482s 00:07:22.595 user 0m0.245s 00:07:22.595 sys 0m0.143s 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:22.595 ************************************ 00:07:22.595 END TEST dd_unknown_flag 00:07:22.595 ************************************ 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.595 ************************************ 00:07:22.595 START TEST dd_invalid_json 00:07:22.595 ************************************ 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:22.595 09:21:46 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:22.595 09:21:46 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:22.855 [2024-10-16 09:21:47.040956] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:22.855 [2024-10-16 09:21:47.041198] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61969 ] 00:07:22.855 [2024-10-16 09:21:47.176403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.855 [2024-10-16 09:21:47.217023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.855 [2024-10-16 09:21:47.217100] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:22.855 [2024-10-16 09:21:47.217113] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:22.855 [2024-10-16 09:21:47.217121] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.855 [2024-10-16 09:21:47.217154] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:23.114 ************************************ 00:07:23.114 END TEST dd_invalid_json 00:07:23.114 ************************************ 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.114 00:07:23.114 real 0m0.288s 00:07:23.114 user 0m0.124s 00:07:23.114 sys 0m0.063s 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.114 ************************************ 00:07:23.114 START TEST dd_invalid_seek 00:07:23.114 ************************************ 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:23.114 
09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.114 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:23.114 { 00:07:23.114 "subsystems": [ 00:07:23.114 { 00:07:23.114 "subsystem": "bdev", 00:07:23.114 "config": [ 00:07:23.114 { 00:07:23.114 "params": { 00:07:23.114 "block_size": 512, 00:07:23.114 "num_blocks": 512, 00:07:23.114 "name": "malloc0" 00:07:23.114 }, 00:07:23.114 "method": "bdev_malloc_create" 00:07:23.114 }, 00:07:23.114 { 00:07:23.114 "params": { 00:07:23.114 "block_size": 512, 00:07:23.114 "num_blocks": 512, 00:07:23.114 "name": "malloc1" 00:07:23.114 }, 00:07:23.114 "method": "bdev_malloc_create" 00:07:23.114 }, 00:07:23.114 { 00:07:23.114 "method": "bdev_wait_for_examine" 00:07:23.114 } 00:07:23.114 ] 00:07:23.114 } 00:07:23.114 ] 00:07:23.114 } 00:07:23.114 [2024-10-16 09:21:47.381000] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
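The JSON configuration printed just above defines two malloc bdevs of 512 blocks x 512 bytes each, and the copy asks for --seek=513 into malloc1, one block past its capacity, so spdk_dd is expected to abort with '--seek value too big (513)'. A stand-alone reproduction, assuming the same configuration is saved to an ordinary file (the /tmp path below is illustrative; the test itself feeds it through /dev/fd/62):

    cat > /tmp/malloc_bdevs.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json /tmp/malloc_bdevs.json
    # expected to fail: malloc1 exposes only 512 blocks, so a 513-block seek cannot fit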
00:07:23.114 [2024-10-16 09:21:47.381092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61994 ] 00:07:23.374 [2024-10-16 09:21:47.517917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.374 [2024-10-16 09:21:47.558307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.374 [2024-10-16 09:21:47.612685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.374 [2024-10-16 09:21:47.673590] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:23.374 [2024-10-16 09:21:47.673653] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.633 [2024-10-16 09:21:47.785304] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:23.633 ************************************ 00:07:23.633 END TEST dd_invalid_seek 00:07:23.633 ************************************ 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.633 00:07:23.633 real 0m0.523s 00:07:23.633 user 0m0.319s 00:07:23.633 sys 0m0.158s 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:23.633 ************************************ 00:07:23.633 START TEST dd_invalid_skip 00:07:23.633 ************************************ 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:23.633 09:21:47 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:23.633 [2024-10-16 09:21:47.976045] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:23.633 [2024-10-16 09:21:47.976171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62028 ] 00:07:23.633 { 00:07:23.633 "subsystems": [ 00:07:23.633 { 00:07:23.633 "subsystem": "bdev", 00:07:23.633 "config": [ 00:07:23.633 { 00:07:23.633 "params": { 00:07:23.633 "block_size": 512, 00:07:23.633 "num_blocks": 512, 00:07:23.633 "name": "malloc0" 00:07:23.633 }, 00:07:23.633 "method": "bdev_malloc_create" 00:07:23.633 }, 00:07:23.633 { 00:07:23.633 "params": { 00:07:23.633 "block_size": 512, 00:07:23.633 "num_blocks": 512, 00:07:23.633 "name": "malloc1" 00:07:23.633 }, 00:07:23.633 "method": "bdev_malloc_create" 00:07:23.633 }, 00:07:23.633 { 00:07:23.633 "method": "bdev_wait_for_examine" 00:07:23.633 } 00:07:23.633 ] 00:07:23.633 } 00:07:23.633 ] 00:07:23.633 } 00:07:23.892 [2024-10-16 09:21:48.116932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.892 [2024-10-16 09:21:48.160702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.892 [2024-10-16 09:21:48.212977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.892 [2024-10-16 09:21:48.271558] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:23.892 [2024-10-16 09:21:48.271636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.152 [2024-10-16 09:21:48.383243] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:07:24.152 ************************************ 00:07:24.152 END TEST dd_invalid_skip 00:07:24.152 ************************************ 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.152 00:07:24.152 real 0m0.565s 00:07:24.152 user 0m0.396s 00:07:24.152 sys 0m0.152s 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.152 ************************************ 00:07:24.152 START TEST dd_invalid_input_count 00:07:24.152 ************************************ 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:07:24.152 09:21:48 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.152 09:21:48 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:24.412 [2024-10-16 09:21:48.562349] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:24.412 [2024-10-16 09:21:48.562433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62067 ] 00:07:24.412 { 00:07:24.412 "subsystems": [ 00:07:24.412 { 00:07:24.412 "subsystem": "bdev", 00:07:24.412 "config": [ 00:07:24.412 { 00:07:24.412 "params": { 00:07:24.412 "block_size": 512, 00:07:24.412 "num_blocks": 512, 00:07:24.412 "name": "malloc0" 00:07:24.412 }, 00:07:24.412 "method": "bdev_malloc_create" 00:07:24.412 }, 00:07:24.412 { 00:07:24.412 "params": { 00:07:24.412 "block_size": 512, 00:07:24.412 "num_blocks": 512, 00:07:24.412 "name": "malloc1" 00:07:24.412 }, 00:07:24.412 "method": "bdev_malloc_create" 00:07:24.412 }, 00:07:24.412 { 00:07:24.412 "method": "bdev_wait_for_examine" 00:07:24.412 } 00:07:24.412 ] 00:07:24.412 } 00:07:24.412 ] 00:07:24.412 } 00:07:24.412 [2024-10-16 09:21:48.692798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.412 [2024-10-16 09:21:48.732333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.412 [2024-10-16 09:21:48.785149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.672 [2024-10-16 09:21:48.844328] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:24.672 [2024-10-16 09:21:48.844398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:24.672 [2024-10-16 09:21:48.958048] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.672 00:07:24.672 real 0m0.510s 00:07:24.672 user 0m0.319s 00:07:24.672 sys 0m0.150s 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:24.672 ************************************ 00:07:24.672 END TEST dd_invalid_input_count 00:07:24.672 ************************************ 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.672 ************************************ 00:07:24.672 START TEST dd_invalid_output_count 00:07:24.672 ************************************ 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.672 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:24.932 { 00:07:24.932 "subsystems": [ 00:07:24.932 { 00:07:24.932 "subsystem": "bdev", 00:07:24.932 "config": [ 00:07:24.932 { 00:07:24.932 "params": { 00:07:24.932 "block_size": 512, 00:07:24.932 "num_blocks": 512, 00:07:24.932 "name": "malloc0" 00:07:24.932 }, 00:07:24.932 "method": "bdev_malloc_create" 00:07:24.932 }, 00:07:24.932 { 00:07:24.932 "method": "bdev_wait_for_examine" 00:07:24.932 } 00:07:24.932 ] 00:07:24.932 } 00:07:24.932 ] 00:07:24.932 } 00:07:24.932 [2024-10-16 09:21:49.127845] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 
initialization... 00:07:24.932 [2024-10-16 09:21:49.127964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62099 ] 00:07:24.932 [2024-10-16 09:21:49.264477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.932 [2024-10-16 09:21:49.303907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.191 [2024-10-16 09:21:49.357119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.191 [2024-10-16 09:21:49.407415] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:25.191 [2024-10-16 09:21:49.407767] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.191 [2024-10-16 09:21:49.524144] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:07:25.191 ************************************ 00:07:25.191 END TEST dd_invalid_output_count 00:07:25.191 ************************************ 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.191 00:07:25.191 real 0m0.524s 00:07:25.191 user 0m0.335s 00:07:25.191 sys 0m0.135s 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.191 09:21:49 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.451 ************************************ 00:07:25.451 START TEST dd_bs_not_multiple 00:07:25.451 ************************************ 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:25.451 09:21:49 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.451 09:21:49 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:25.451 [2024-10-16 09:21:49.707737] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:25.451 [2024-10-16 09:21:49.707830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62132 ] 00:07:25.451 { 00:07:25.451 "subsystems": [ 00:07:25.451 { 00:07:25.451 "subsystem": "bdev", 00:07:25.451 "config": [ 00:07:25.451 { 00:07:25.451 "params": { 00:07:25.451 "block_size": 512, 00:07:25.451 "num_blocks": 512, 00:07:25.451 "name": "malloc0" 00:07:25.451 }, 00:07:25.451 "method": "bdev_malloc_create" 00:07:25.451 }, 00:07:25.451 { 00:07:25.451 "params": { 00:07:25.451 "block_size": 512, 00:07:25.451 "num_blocks": 512, 00:07:25.451 "name": "malloc1" 00:07:25.451 }, 00:07:25.451 "method": "bdev_malloc_create" 00:07:25.451 }, 00:07:25.451 { 00:07:25.451 "method": "bdev_wait_for_examine" 00:07:25.451 } 00:07:25.451 ] 00:07:25.451 } 00:07:25.451 ] 00:07:25.451 } 00:07:25.451 [2024-10-16 09:21:49.845019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.710 [2024-10-16 09:21:49.887153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.710 [2024-10-16 09:21:49.940297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.710 [2024-10-16 09:21:49.998985] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:25.710 [2024-10-16 09:21:49.999056] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.969 [2024-10-16 09:21:50.118304] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.969 ************************************ 00:07:25.969 END TEST dd_bs_not_multiple 00:07:25.969 ************************************ 00:07:25.969 00:07:25.969 real 0m0.536s 00:07:25.969 user 0m0.338s 00:07:25.969 sys 0m0.160s 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:25.969 ************************************ 00:07:25.969 END TEST spdk_dd_negative 00:07:25.969 ************************************ 00:07:25.969 00:07:25.969 real 0m6.321s 00:07:25.969 user 0m3.318s 00:07:25.969 sys 0m2.404s 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.969 09:21:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.969 ************************************ 00:07:25.969 END TEST spdk_dd 00:07:25.969 ************************************ 00:07:25.969 00:07:25.969 real 1m16.462s 00:07:25.969 user 0m48.429s 00:07:25.969 sys 0m34.464s 00:07:25.969 09:21:50 spdk_dd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:25.969 09:21:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:25.969 09:21:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:25.969 09:21:50 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:25.969 09:21:50 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:25.969 09:21:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.969 09:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:25.969 09:21:50 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:25.969 09:21:50 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:25.969 09:21:50 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:25.969 09:21:50 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:25.969 09:21:50 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:25.969 09:21:50 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:25.969 09:21:50 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:25.969 09:21:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.969 09:21:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.969 09:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:25.969 ************************************ 00:07:25.969 START TEST nvmf_tcp 00:07:25.969 ************************************ 00:07:25.969 09:21:50 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:26.228 * Looking for test storage... 00:07:26.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:26.228 09:21:50 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:26.228 09:21:50 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:26.228 09:21:50 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:26.228 09:21:50 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:26.228 09:21:50 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.229 09:21:50 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:26.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.229 --rc genhtml_branch_coverage=1 00:07:26.229 --rc genhtml_function_coverage=1 00:07:26.229 --rc genhtml_legend=1 00:07:26.229 --rc geninfo_all_blocks=1 00:07:26.229 --rc geninfo_unexecuted_blocks=1 00:07:26.229 00:07:26.229 ' 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:26.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.229 --rc genhtml_branch_coverage=1 00:07:26.229 --rc genhtml_function_coverage=1 00:07:26.229 --rc genhtml_legend=1 00:07:26.229 --rc geninfo_all_blocks=1 00:07:26.229 --rc geninfo_unexecuted_blocks=1 00:07:26.229 00:07:26.229 ' 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:26.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.229 --rc genhtml_branch_coverage=1 00:07:26.229 --rc genhtml_function_coverage=1 00:07:26.229 --rc genhtml_legend=1 00:07:26.229 --rc geninfo_all_blocks=1 00:07:26.229 --rc geninfo_unexecuted_blocks=1 00:07:26.229 00:07:26.229 ' 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:26.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.229 --rc genhtml_branch_coverage=1 00:07:26.229 --rc genhtml_function_coverage=1 00:07:26.229 --rc genhtml_legend=1 00:07:26.229 --rc geninfo_all_blocks=1 00:07:26.229 --rc geninfo_unexecuted_blocks=1 00:07:26.229 00:07:26.229 ' 00:07:26.229 09:21:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:26.229 09:21:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:26.229 09:21:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.229 09:21:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.229 ************************************ 00:07:26.229 START TEST nvmf_target_core 00:07:26.229 ************************************ 00:07:26.229 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:26.229 * Looking for test storage... 00:07:26.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:26.488 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.489 --rc genhtml_branch_coverage=1 00:07:26.489 --rc genhtml_function_coverage=1 00:07:26.489 --rc genhtml_legend=1 00:07:26.489 --rc geninfo_all_blocks=1 00:07:26.489 --rc geninfo_unexecuted_blocks=1 00:07:26.489 00:07:26.489 ' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.489 --rc genhtml_branch_coverage=1 00:07:26.489 --rc genhtml_function_coverage=1 00:07:26.489 --rc genhtml_legend=1 00:07:26.489 --rc geninfo_all_blocks=1 00:07:26.489 --rc geninfo_unexecuted_blocks=1 00:07:26.489 00:07:26.489 ' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.489 --rc genhtml_branch_coverage=1 00:07:26.489 --rc genhtml_function_coverage=1 00:07:26.489 --rc genhtml_legend=1 00:07:26.489 --rc geninfo_all_blocks=1 00:07:26.489 --rc geninfo_unexecuted_blocks=1 00:07:26.489 00:07:26.489 ' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:26.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.489 --rc genhtml_branch_coverage=1 00:07:26.489 --rc genhtml_function_coverage=1 00:07:26.489 --rc genhtml_legend=1 00:07:26.489 --rc geninfo_all_blocks=1 00:07:26.489 --rc geninfo_unexecuted_blocks=1 00:07:26.489 00:07:26.489 ' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.489 ************************************ 00:07:26.489 START TEST nvmf_host_management 00:07:26.489 ************************************ 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:26.489 * Looking for test storage... 
00:07:26.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:26.489 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:26.749 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:26.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.750 --rc genhtml_branch_coverage=1 00:07:26.750 --rc genhtml_function_coverage=1 00:07:26.750 --rc genhtml_legend=1 00:07:26.750 --rc geninfo_all_blocks=1 00:07:26.750 --rc geninfo_unexecuted_blocks=1 00:07:26.750 00:07:26.750 ' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:26.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.750 --rc genhtml_branch_coverage=1 00:07:26.750 --rc genhtml_function_coverage=1 00:07:26.750 --rc genhtml_legend=1 00:07:26.750 --rc geninfo_all_blocks=1 00:07:26.750 --rc geninfo_unexecuted_blocks=1 00:07:26.750 00:07:26.750 ' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:26.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.750 --rc genhtml_branch_coverage=1 00:07:26.750 --rc genhtml_function_coverage=1 00:07:26.750 --rc genhtml_legend=1 00:07:26.750 --rc geninfo_all_blocks=1 00:07:26.750 --rc geninfo_unexecuted_blocks=1 00:07:26.750 00:07:26.750 ' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:26.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.750 --rc genhtml_branch_coverage=1 00:07:26.750 --rc genhtml_function_coverage=1 00:07:26.750 --rc genhtml_legend=1 00:07:26.750 --rc geninfo_all_blocks=1 00:07:26.750 --rc geninfo_unexecuted_blocks=1 00:07:26.750 00:07:26.750 ' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
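The block above traces scripts/common.sh deciding whether the installed lcov predates version 2 (lt 1.15 2 via cmp_versions); since it does, the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' switches end up in the exported LCOV_OPTS/LCOV. A condensed, non-verbatim sketch of that comparison (the real helper also handles gt/ge/le/eq through flag variables):

# Condensed sketch of the version check traced above; versions are split on '.', '-' and ':'
# and compared field by field, missing fields defaulting to 0.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
# e.g. lt 1.15 2 succeeds, so the coverage run keeps the legacy --rc lcov_* option names.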
00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.750 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.750 09:21:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.750 09:21:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:26.750 Cannot find device "nvmf_init_br" 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:26.750 Cannot find device "nvmf_init_br2" 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:26.750 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:26.750 Cannot find device "nvmf_tgt_br" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:26.751 Cannot find device "nvmf_tgt_br2" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:26.751 Cannot find device "nvmf_init_br" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:26.751 Cannot find device "nvmf_init_br2" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:26.751 Cannot find device "nvmf_tgt_br" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:26.751 Cannot find device "nvmf_tgt_br2" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:26.751 Cannot find device "nvmf_br" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:26.751 Cannot find device "nvmf_init_if" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:26.751 Cannot find device "nvmf_init_if2" 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:26.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:26.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:26.751 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:27.009 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:27.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:27.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:07:27.268 00:07:27.268 --- 10.0.0.3 ping statistics --- 00:07:27.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.268 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:27.268 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:27.268 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:07:27.268 00:07:27.268 --- 10.0.0.4 ping statistics --- 00:07:27.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.268 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:27.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:27.268 00:07:27.268 --- 10.0.0.1 ping statistics --- 00:07:27.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.268 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:27.268 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:27.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:27.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:07:27.268 00:07:27.268 --- 10.0.0.2 ping statistics --- 00:07:27.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.269 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=62474 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 62474 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62474 ']' 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.269 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.269 [2024-10-16 09:21:51.571113] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:27.269 [2024-10-16 09:21:51.571397] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.528 [2024-10-16 09:21:51.712837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.528 [2024-10-16 09:21:51.774590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.528 [2024-10-16 09:21:51.774697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.528 [2024-10-16 09:21:51.774707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.528 [2024-10-16 09:21:51.774715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.528 [2024-10-16 09:21:51.774722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.528 [2024-10-16 09:21:51.775915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.528 [2024-10-16 09:21:51.776970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.528 [2024-10-16 09:21:51.777163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:27.528 [2024-10-16 09:21:51.777171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.528 [2024-10-16 09:21:51.833941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.528 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.528 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:27.528 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:27.528 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.528 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 [2024-10-16 09:21:51.947411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
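The nvmf/common.sh trace above (lines @145-@227) builds the virtual test network with nvmf_veth_init and then launches nvmf_tgt inside the target namespace. A condensed sketch of those same commands, showing only the first initiator/target veth pair (the log repeats the steps for nvmf_init_if2/nvmf_tgt_if2 and 10.0.0.2/10.0.0.4):

# Target side lives in its own network namespace, joined to the initiator side by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                  # initiator -> target reachability check
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check
# The nvmf target itself is then started inside the namespace, as traced above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E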
00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.787 09:21:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 Malloc0 00:07:27.787 [2024-10-16 09:21:52.018324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62515 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62515 /var/tmp/bdevperf.sock 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62515 ']' 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
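The rpc_cmd block at target/host_management.sh@23-30 feeds a batch of JSON-RPC calls (rpcs.txt, piped through cat) to the target, so the individual calls are not echoed; only their effects show up above (nvmf_create_transport, the Malloc0 bdev, the TCP listener on 10.0.0.3:4420). Below is a rough, non-verbatim equivalent using scripts/rpc.py; the exact contents of rpcs.txt are an assumption, only the names and sizes that do appear in the log (Malloc0, MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, nqn.2016-06.io.spdk:cnode0/host0, port 4420) are confirmed:

# Assumed equivalent of the batched subsystem setup (not the verbatim rpcs.txt):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                         # traced explicitly above
$rpc bdev_malloc_create -b Malloc0 64 512                            # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0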
00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:27.787 { 00:07:27.787 "params": { 00:07:27.787 "name": "Nvme$subsystem", 00:07:27.787 "trtype": "$TEST_TRANSPORT", 00:07:27.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.787 "adrfam": "ipv4", 00:07:27.787 "trsvcid": "$NVMF_PORT", 00:07:27.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.787 "hdgst": ${hdgst:-false}, 00:07:27.787 "ddgst": ${ddgst:-false} 00:07:27.787 }, 00:07:27.787 "method": "bdev_nvme_attach_controller" 00:07:27.787 } 00:07:27.787 EOF 00:07:27.787 )") 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:27.787 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:27.787 "params": { 00:07:27.787 "name": "Nvme0", 00:07:27.787 "trtype": "tcp", 00:07:27.787 "traddr": "10.0.0.3", 00:07:27.787 "adrfam": "ipv4", 00:07:27.787 "trsvcid": "4420", 00:07:27.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:27.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:27.787 "hdgst": false, 00:07:27.787 "ddgst": false 00:07:27.787 }, 00:07:27.787 "method": "bdev_nvme_attach_controller" 00:07:27.787 }' 00:07:27.787 [2024-10-16 09:21:52.123883] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:07:27.787 [2024-10-16 09:21:52.124592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62515 ] 00:07:28.046 [2024-10-16 09:21:52.259712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.046 [2024-10-16 09:21:52.325697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.046 [2024-10-16 09:21:52.394024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.305 Running I/O for 10 seconds... 
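Once bdevperf is running against the generated JSON config, the waitforio helper traced below (host_management.sh@52-64) polls bdevperf's private RPC socket until the Nvme0n1 bdev has completed at least 100 reads, retrying up to ten times. A condensed sketch of that loop, under the assumption that rpc_cmd is a thin wrapper around scripts/rpc.py -s <socket>:

# Minimal sketch of the waitforio polling seen in the trace that follows.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i=10
    while (( i != 0 )); do
        local read_io_count
        read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [[ $read_io_count -ge 100 ]]; then
            ret=0   # enough I/O observed (the trace shows 67 reads, then 579 on the next poll)
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    return $ret
}
# e.g. waitforio /var/tmp/bdevperf.sock Nvme0n1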
00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:28.305 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.566 09:21:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.566 [2024-10-16 09:21:52.967139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.566 [2024-10-16 09:21:52.967289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:28.566 [2024-10-16 09:21:52.967431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967566] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.566 [2024-10-16 09:21:52.967582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.566 [2024-10-16 09:21:52.967693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.566 [2024-10-16 09:21:52.967735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.566 [2024-10-16 09:21:52.967747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:28.567 [2024-10-16 09:21:52.967788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.967978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.967990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 
[2024-10-16 09:21:52.968011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 
09:21:52.968236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 
09:21:52.968465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:28.567 [2024-10-16 09:21:52.968661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.567 [2024-10-16 09:21:52.968671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fe7c0 is same with the state(6) to be set 00:07:28.841 [2024-10-16 09:21:52.969716] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15fe7c0 was disconnected and freed. reset controller. 
00:07:28.841 [2024-10-16 09:21:52.969855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:28.841 [2024-10-16 09:21:52.969874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.841 [2024-10-16 09:21:52.969885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:28.841 [2024-10-16 09:21:52.969895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.841 [2024-10-16 09:21:52.969905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:28.841 [2024-10-16 09:21:52.969914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.841 [2024-10-16 09:21:52.969924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:28.841 [2024-10-16 09:21:52.969934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:28.841 [2024-10-16 09:21:52.969943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15feb20 is same with the state(6) to be set 00:07:28.841 [2024-10-16 09:21:52.971034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:28.841 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:28.841 00:07:28.841 Latency(us) 00:07:28.841 [2024-10-16T09:21:53.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.841 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:28.841 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:28.841 Verification LBA range: start 0x0 length 0x400 00:07:28.841 Nvme0n1 : 0.44 1442.02 90.13 144.20 0.00 38976.17 3261.91 39559.91 00:07:28.841 [2024-10-16T09:21:53.245Z] =================================================================================================================== 00:07:28.841 [2024-10-16T09:21:53.245Z] Total : 1442.02 90.13 144.20 0.00 38976.17 3261.91 39559.91 00:07:28.841 [2024-10-16 09:21:52.973213] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.841 [2024-10-16 09:21:52.973251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15feb20 (9): Bad file descriptor 00:07:28.841 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.841 09:21:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:28.841 [2024-10-16 09:21:52.980587] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
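The host-management sequence traced above reduces to a short RPC polling loop followed by a host remove/re-add against the running subsystem: bdevperf's socket is polled until Nvme0n1 has served at least 100 reads, the host NQN is then dropped from cnode0 while I/O is in flight (producing the ABORTED - SQ DELETION completions and controller reset logged above), and finally re-added. A minimal sketch reconstructed from the commands visible in the xtrace output; the canonical helper is waitforio in test/nvmf/target/host_management.sh, and rpc_cmd there wraps scripts/rpc.py, so exact names and retry counts may differ:

    # sketch: poll bdevperf's RPC socket until Nvme0n1 has >= 100 completed reads
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ret=1
    for ((i = 10; i != 0; i--)); do
        reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    [ "$ret" -eq 0 ] || exit 1

    # removing the host aborts its queue pairs (the SQ DELETION notices above);
    # re-adding it lets bdevperf's NVMe controller reset and reconnect
    "$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    "$rpc" nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The two nvmf_subsystem_*_host calls go to the target's default RPC socket rather than bdevperf's, which is why only the iostat query carries the -s /var/tmp/bdevperf.sock option in the trace.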
00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62515 00:07:29.794 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62515) - No such process 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:29.794 { 00:07:29.794 "params": { 00:07:29.794 "name": "Nvme$subsystem", 00:07:29.794 "trtype": "$TEST_TRANSPORT", 00:07:29.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:29.794 "adrfam": "ipv4", 00:07:29.794 "trsvcid": "$NVMF_PORT", 00:07:29.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:29.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:29.794 "hdgst": ${hdgst:-false}, 00:07:29.794 "ddgst": ${ddgst:-false} 00:07:29.794 }, 00:07:29.794 "method": "bdev_nvme_attach_controller" 00:07:29.794 } 00:07:29.794 EOF 00:07:29.794 )") 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:29.794 09:21:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:29.794 "params": { 00:07:29.794 "name": "Nvme0", 00:07:29.794 "trtype": "tcp", 00:07:29.794 "traddr": "10.0.0.3", 00:07:29.794 "adrfam": "ipv4", 00:07:29.794 "trsvcid": "4420", 00:07:29.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:29.794 "hdgst": false, 00:07:29.794 "ddgst": false 00:07:29.794 }, 00:07:29.794 "method": "bdev_nvme_attach_controller" 00:07:29.794 }' 00:07:29.794 [2024-10-16 09:21:54.042616] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:29.794 [2024-10-16 09:21:54.042740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62555 ] 00:07:29.794 [2024-10-16 09:21:54.186433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.053 [2024-10-16 09:21:54.236552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.053 [2024-10-16 09:21:54.303799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.053 Running I/O for 1 seconds... 00:07:31.431 1536.00 IOPS, 96.00 MiB/s 00:07:31.431 Latency(us) 00:07:31.431 [2024-10-16T09:21:55.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.431 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:31.431 Verification LBA range: start 0x0 length 0x400 00:07:31.431 Nvme0n1 : 1.00 1593.89 99.62 0.00 0.00 39394.50 5928.03 39559.91 00:07:31.431 [2024-10-16T09:21:55.835Z] =================================================================================================================== 00:07:31.431 [2024-10-16T09:21:55.835Z] Total : 1593.89 99.62 0.00 0.00 39394.50 5928.03 39559.91 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.431 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.431 rmmod nvme_tcp 00:07:31.432 rmmod nvme_fabrics 00:07:31.432 rmmod nvme_keyring 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 62474 ']' 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 62474 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62474 ']' 00:07:31.432 09:21:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62474 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.432 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62474 00:07:31.691 killing process with pid 62474 00:07:31.691 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:31.691 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:31.691 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62474' 00:07:31.691 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62474 00:07:31.691 09:21:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62474 00:07:31.950 [2024-10-16 09:21:56.115455] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:31.950 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:31.951 09:21:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.951 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.210 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:32.210 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:32.210 00:07:32.210 real 0m5.599s 00:07:32.210 user 0m19.692s 00:07:32.210 sys 0m1.572s 00:07:32.210 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.210 ************************************ 00:07:32.210 END TEST nvmf_host_management 00:07:32.210 ************************************ 00:07:32.210 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.210 09:21:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:32.210 09:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:32.210 09:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:32.211 ************************************ 00:07:32.211 START TEST nvmf_lvol 00:07:32.211 ************************************ 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:32.211 * Looking for test storage... 
00:07:32.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.211 --rc genhtml_branch_coverage=1 00:07:32.211 --rc genhtml_function_coverage=1 00:07:32.211 --rc genhtml_legend=1 00:07:32.211 --rc geninfo_all_blocks=1 00:07:32.211 --rc geninfo_unexecuted_blocks=1 00:07:32.211 00:07:32.211 ' 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.211 --rc genhtml_branch_coverage=1 00:07:32.211 --rc genhtml_function_coverage=1 00:07:32.211 --rc genhtml_legend=1 00:07:32.211 --rc geninfo_all_blocks=1 00:07:32.211 --rc geninfo_unexecuted_blocks=1 00:07:32.211 00:07:32.211 ' 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.211 --rc genhtml_branch_coverage=1 00:07:32.211 --rc genhtml_function_coverage=1 00:07:32.211 --rc genhtml_legend=1 00:07:32.211 --rc geninfo_all_blocks=1 00:07:32.211 --rc geninfo_unexecuted_blocks=1 00:07:32.211 00:07:32.211 ' 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.211 --rc genhtml_branch_coverage=1 00:07:32.211 --rc genhtml_function_coverage=1 00:07:32.211 --rc genhtml_legend=1 00:07:32.211 --rc geninfo_all_blocks=1 00:07:32.211 --rc geninfo_unexecuted_blocks=1 00:07:32.211 00:07:32.211 ' 00:07:32.211 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.471 09:21:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:32.471 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:32.471 
09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:32.471 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:32.472 Cannot find device "nvmf_init_br" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:32.472 Cannot find device "nvmf_init_br2" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:32.472 Cannot find device "nvmf_tgt_br" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:32.472 Cannot find device "nvmf_tgt_br2" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:32.472 Cannot find device "nvmf_init_br" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:32.472 Cannot find device "nvmf_init_br2" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:32.472 Cannot find device "nvmf_tgt_br" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:32.472 Cannot find device "nvmf_tgt_br2" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:32.472 Cannot find device "nvmf_br" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:32.472 Cannot find device "nvmf_init_if" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:32.472 Cannot find device "nvmf_init_if2" 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:32.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:32.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:32.472 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:32.731 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:32.732 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:32.732 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:32.732 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:32.732 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:32.732 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:32.732 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:32.732 09:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:32.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:32.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:32.732 00:07:32.732 --- 10.0.0.3 ping statistics --- 00:07:32.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.732 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:32.732 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:32.732 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:07:32.732 00:07:32.732 --- 10.0.0.4 ping statistics --- 00:07:32.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.732 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:32.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:32.732 00:07:32.732 --- 10.0.0.1 ping statistics --- 00:07:32.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.732 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:32.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:07:32.732 00:07:32.732 --- 10.0.0.2 ping statistics --- 00:07:32.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.732 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=62824 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 62824 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 62824 ']' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.732 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:32.732 [2024-10-16 09:21:57.127803] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:32.732 [2024-10-16 09:21:57.127909] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.994 [2024-10-16 09:21:57.268365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.994 [2024-10-16 09:21:57.330899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.994 [2024-10-16 09:21:57.331201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.994 [2024-10-16 09:21:57.331284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.994 [2024-10-16 09:21:57.331355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.994 [2024-10-16 09:21:57.331435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.994 [2024-10-16 09:21:57.333131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.994 [2024-10-16 09:21:57.333293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.994 [2024-10-16 09:21:57.333296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.994 [2024-10-16 09:21:57.396381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.254 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.254 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:33.254 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:33.254 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.254 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.254 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.254 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.513 [2024-10-16 09:21:57.779867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.513 09:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:34.080 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:34.080 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:34.339 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:34.339 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:34.598 09:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:34.884 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b6a57f6e-04e6-4564-b5e0-2e3e310b228e 00:07:34.885 09:21:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b6a57f6e-04e6-4564-b5e0-2e3e310b228e lvol 20 00:07:35.156 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=99e00396-9847-4b36-a186-bed3e07622f6 00:07:35.156 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.414 09:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 99e00396-9847-4b36-a186-bed3e07622f6 00:07:35.673 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:35.932 [2024-10-16 09:22:00.289270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:35.932 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:36.191 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:36.191 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62892 00:07:36.191 09:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:37.568 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 99e00396-9847-4b36-a186-bed3e07622f6 MY_SNAPSHOT 00:07:37.568 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9017dfe3-4132-4bae-9e08-9fada412c32e 00:07:37.568 09:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 99e00396-9847-4b36-a186-bed3e07622f6 30 00:07:37.827 09:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9017dfe3-4132-4bae-9e08-9fada412c32e MY_CLONE 00:07:38.084 09:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=93d0585a-6b28-486f-b76a-0ad37bb46966 00:07:38.084 09:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 93d0585a-6b28-486f-b76a-0ad37bb46966 00:07:38.651 09:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62892 00:07:46.805 Initializing NVMe Controllers 00:07:46.805 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:46.805 Controller IO queue size 128, less than required. 00:07:46.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:46.805 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:46.805 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:46.805 Initialization complete. Launching workers. 
00:07:46.805 ======================================================== 00:07:46.805 Latency(us) 00:07:46.805 Device Information : IOPS MiB/s Average min max 00:07:46.805 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7686.01 30.02 16658.11 3258.40 81681.92 00:07:46.805 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7177.43 28.04 17851.13 3863.33 88032.25 00:07:46.805 ======================================================== 00:07:46.805 Total : 14863.45 58.06 17234.21 3258.40 88032.25 00:07:46.805 00:07:46.805 09:22:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.805 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 99e00396-9847-4b36-a186-bed3e07622f6 00:07:47.372 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6a57f6e-04e6-4564-b5e0-2e3e310b228e 00:07:47.372 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:47.372 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:47.372 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:47.372 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:47.372 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:47.631 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.631 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.632 rmmod nvme_tcp 00:07:47.632 rmmod nvme_fabrics 00:07:47.632 rmmod nvme_keyring 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 62824 ']' 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 62824 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 62824 ']' 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 62824 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62824 00:07:47.632 killing process with pid 62824 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 62824' 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 62824 00:07:47.632 09:22:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 62824 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:47.890 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:48.148 00:07:48.148 real 0m16.003s 00:07:48.148 user 1m5.747s 00:07:48.148 sys 0m4.124s 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:48.148 ************************************ 00:07:48.148 END TEST nvmf_lvol 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.148 ************************************ 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.148 ************************************ 00:07:48.148 START TEST nvmf_lvs_grow 00:07:48.148 ************************************ 00:07:48.148 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:48.407 * Looking for test storage... 00:07:48.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.407 --rc genhtml_branch_coverage=1 00:07:48.407 --rc genhtml_function_coverage=1 00:07:48.407 --rc genhtml_legend=1 00:07:48.407 --rc geninfo_all_blocks=1 00:07:48.407 --rc geninfo_unexecuted_blocks=1 00:07:48.407 00:07:48.407 ' 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.407 --rc genhtml_branch_coverage=1 00:07:48.407 --rc genhtml_function_coverage=1 00:07:48.407 --rc genhtml_legend=1 00:07:48.407 --rc geninfo_all_blocks=1 00:07:48.407 --rc geninfo_unexecuted_blocks=1 00:07:48.407 00:07:48.407 ' 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.407 --rc genhtml_branch_coverage=1 00:07:48.407 --rc genhtml_function_coverage=1 00:07:48.407 --rc genhtml_legend=1 00:07:48.407 --rc geninfo_all_blocks=1 00:07:48.407 --rc geninfo_unexecuted_blocks=1 00:07:48.407 00:07:48.407 ' 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:48.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.407 --rc genhtml_branch_coverage=1 00:07:48.407 --rc genhtml_function_coverage=1 00:07:48.407 --rc genhtml_legend=1 00:07:48.407 --rc geninfo_all_blocks=1 00:07:48.407 --rc geninfo_unexecuted_blocks=1 00:07:48.407 00:07:48.407 ' 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.407 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:48.407 09:22:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:48.408 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
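At this point nvmf_lvs_grow.sh has finished sourcing test/nvmf/common.sh, and the trace that follows is the standard target bring-up that every tcp-transport test in this run performs. A condensed sketch of that sequence is below; the helper names, paths and options are copied from the traced commands, while the one-line comments are a summary of what the log shows each helper expanding to, not the literal common.sh code.
# Sketch of the bring-up traced below (nvmftestinit/nvmfappstart are common.sh helpers).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
nvmftestinit                    # build the veth/netns/bridge topology and verify it with pings
nvmfappstart -m 0x1             # run nvmf_tgt inside nvmf_tgt_ns_spdk on core 0, wait for /var/tmp/spdk.sock
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192    # create the TCP transport with the traced options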
00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:48.408 Cannot find device "nvmf_init_br" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:48.408 Cannot find device "nvmf_init_br2" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:48.408 Cannot find device "nvmf_tgt_br" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.408 Cannot find device "nvmf_tgt_br2" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:48.408 Cannot find device "nvmf_init_br" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:48.408 Cannot find device "nvmf_init_br2" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:48.408 Cannot find device "nvmf_tgt_br" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:48.408 Cannot find device "nvmf_tgt_br2" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:48.408 Cannot find device "nvmf_br" 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:48.408 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:48.668 Cannot find device "nvmf_init_if" 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:48.668 Cannot find device "nvmf_init_if2" 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:48.668 09:22:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
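The run of ip commands above is nvmf_veth_init rebuilding the test network from scratch; the iptables ACCEPT rules and the four pings that follow are its verification step. Reduced to plain iproute2 calls, the topology looks roughly like this; interface names and addresses are the ones in the trace, but the ordering is condensed and the error handling and teardown probes are omitted.
# Four veth pairs: the *_if ends carry the IPs, the *_br peers get enslaved to one bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target-side interfaces move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator addresses stay on the host, target addresses live inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring every link up, then join the four bridge-side peers to nvmf_br so
# 10.0.0.1/2 on the host can reach 10.0.0.3/4 inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done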
00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:48.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:48.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:07:48.668 00:07:48.668 --- 10.0.0.3 ping statistics --- 00:07:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.668 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:48.668 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:48.668 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:07:48.668 00:07:48.668 --- 10.0.0.4 ping statistics --- 00:07:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.668 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:48.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:48.668 00:07:48.668 --- 10.0.0.1 ping statistics --- 00:07:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.668 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:48.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:07:48.668 00:07:48.668 --- 10.0.0.2 ping statistics --- 00:07:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.668 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:48.668 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=63275 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 63275 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63275 ']' 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.927 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.927 [2024-10-16 09:22:13.144065] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:48.927 [2024-10-16 09:22:13.144137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.927 [2024-10-16 09:22:13.282901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.186 [2024-10-16 09:22:13.338191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.186 [2024-10-16 09:22:13.338257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.186 [2024-10-16 09:22:13.338270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.186 [2024-10-16 09:22:13.338280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.186 [2024-10-16 09:22:13.338289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.186 [2024-10-16 09:22:13.338760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.186 [2024-10-16 09:22:13.396145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.186 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.186 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:49.186 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:49.186 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.186 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.186 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.186 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:49.444 [2024-10-16 09:22:13.794283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.444 ************************************ 00:07:49.444 START TEST lvs_grow_clean 00:07:49.444 ************************************ 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:49.444 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:49.444 09:22:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:49.445 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:49.445 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:49.445 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.445 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.445 09:22:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.703 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:49.703 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:49.961 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:07:49.961 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:07:49.961 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:50.220 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:50.220 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:50.220 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea lvol 150 00:07:50.787 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=34efcfa8-c69d-4e60-9d2a-3334e90519a3 00:07:50.787 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:50.787 09:22:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:50.787 [2024-10-16 09:22:15.115486] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.787 [2024-10-16 09:22:15.115611] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.787 true 00:07:50.787 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:50.787 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:07:51.046 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:51.046 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:51.305 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34efcfa8-c69d-4e60-9d2a-3334e90519a3 00:07:51.564 09:22:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:51.822 [2024-10-16 09:22:16.128070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:51.822 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63350 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63350 /var/tmp/bdevperf.sock 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63350 ']' 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.081 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:52.081 [2024-10-16 09:22:16.474927] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:07:52.081 [2024-10-16 09:22:16.475010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63350 ] 00:07:52.340 [2024-10-16 09:22:16.607796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.340 [2024-10-16 09:22:16.666946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.340 [2024-10-16 09:22:16.722435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.598 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.598 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:52.598 09:22:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:52.857 Nvme0n1 00:07:52.857 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:53.115 [ 00:07:53.115 { 00:07:53.115 "name": "Nvme0n1", 00:07:53.115 "aliases": [ 00:07:53.115 "34efcfa8-c69d-4e60-9d2a-3334e90519a3" 00:07:53.115 ], 00:07:53.116 "product_name": "NVMe disk", 00:07:53.116 "block_size": 4096, 00:07:53.116 "num_blocks": 38912, 00:07:53.116 "uuid": "34efcfa8-c69d-4e60-9d2a-3334e90519a3", 00:07:53.116 "numa_id": -1, 00:07:53.116 "assigned_rate_limits": { 00:07:53.116 "rw_ios_per_sec": 0, 00:07:53.116 "rw_mbytes_per_sec": 0, 00:07:53.116 "r_mbytes_per_sec": 0, 00:07:53.116 "w_mbytes_per_sec": 0 00:07:53.116 }, 00:07:53.116 "claimed": false, 00:07:53.116 "zoned": false, 00:07:53.116 "supported_io_types": { 00:07:53.116 "read": true, 00:07:53.116 "write": true, 00:07:53.116 "unmap": true, 00:07:53.116 "flush": true, 00:07:53.116 "reset": true, 00:07:53.116 "nvme_admin": true, 00:07:53.116 "nvme_io": true, 00:07:53.116 "nvme_io_md": false, 00:07:53.116 "write_zeroes": true, 00:07:53.116 "zcopy": false, 00:07:53.116 "get_zone_info": false, 00:07:53.116 "zone_management": false, 00:07:53.116 "zone_append": false, 00:07:53.116 "compare": true, 00:07:53.116 "compare_and_write": true, 00:07:53.116 "abort": true, 00:07:53.116 "seek_hole": false, 00:07:53.116 "seek_data": false, 00:07:53.116 "copy": true, 00:07:53.116 "nvme_iov_md": false 00:07:53.116 }, 00:07:53.116 "memory_domains": [ 00:07:53.116 { 00:07:53.116 "dma_device_id": "system", 00:07:53.116 "dma_device_type": 1 00:07:53.116 } 00:07:53.116 ], 00:07:53.116 "driver_specific": { 00:07:53.116 "nvme": [ 00:07:53.116 { 00:07:53.116 "trid": { 00:07:53.116 "trtype": "TCP", 00:07:53.116 "adrfam": "IPv4", 00:07:53.116 "traddr": "10.0.0.3", 00:07:53.116 "trsvcid": "4420", 00:07:53.116 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:53.116 }, 00:07:53.116 "ctrlr_data": { 00:07:53.116 "cntlid": 1, 00:07:53.116 "vendor_id": "0x8086", 00:07:53.116 "model_number": "SPDK bdev Controller", 00:07:53.116 "serial_number": "SPDK0", 00:07:53.116 "firmware_revision": "25.01", 00:07:53.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.116 "oacs": { 00:07:53.116 "security": 0, 00:07:53.116 "format": 0, 00:07:53.116 "firmware": 0, 
00:07:53.116 "ns_manage": 0 00:07:53.116 }, 00:07:53.116 "multi_ctrlr": true, 00:07:53.116 "ana_reporting": false 00:07:53.116 }, 00:07:53.116 "vs": { 00:07:53.116 "nvme_version": "1.3" 00:07:53.116 }, 00:07:53.116 "ns_data": { 00:07:53.116 "id": 1, 00:07:53.116 "can_share": true 00:07:53.116 } 00:07:53.116 } 00:07:53.116 ], 00:07:53.116 "mp_policy": "active_passive" 00:07:53.116 } 00:07:53.116 } 00:07:53.116 ] 00:07:53.116 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:53.116 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63366 00:07:53.116 09:22:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:53.116 Running I/O for 10 seconds... 00:07:54.492 Latency(us) 00:07:54.492 [2024-10-16T09:22:18.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.492 Nvme0n1 : 1.00 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:07:54.492 [2024-10-16T09:22:18.896Z] =================================================================================================================== 00:07:54.492 [2024-10-16T09:22:18.896Z] Total : 6223.00 24.31 0.00 0.00 0.00 0.00 0.00 00:07:54.492 00:07:55.059 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:07:55.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.318 Nvme0n1 : 2.00 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:55.318 [2024-10-16T09:22:19.722Z] =================================================================================================================== 00:07:55.318 [2024-10-16T09:22:19.722Z] Total : 6286.50 24.56 0.00 0.00 0.00 0.00 0.00 00:07:55.318 00:07:55.582 true 00:07:55.582 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:07:55.582 09:22:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:55.840 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:55.840 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:55.840 09:22:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63366 00:07:56.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.408 Nvme0n1 : 3.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:56.408 [2024-10-16T09:22:20.812Z] =================================================================================================================== 00:07:56.408 [2024-10-16T09:22:20.812Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:07:56.408 00:07:57.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.345 Nvme0n1 : 4.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:57.345 [2024-10-16T09:22:21.749Z] 
=================================================================================================================== 00:07:57.345 [2024-10-16T09:22:21.749Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:57.345 00:07:58.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.281 Nvme0n1 : 5.00 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:07:58.281 [2024-10-16T09:22:22.685Z] =================================================================================================================== 00:07:58.281 [2024-10-16T09:22:22.685Z] Total : 6400.80 25.00 0.00 0.00 0.00 0.00 0.00 00:07:58.281 00:07:59.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.217 Nvme0n1 : 6.00 6303.50 24.62 0.00 0.00 0.00 0.00 0.00 00:07:59.217 [2024-10-16T09:22:23.621Z] =================================================================================================================== 00:07:59.217 [2024-10-16T09:22:23.621Z] Total : 6303.50 24.62 0.00 0.00 0.00 0.00 0.00 00:07:59.217 00:08:00.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.152 Nvme0n1 : 7.00 6310.14 24.65 0.00 0.00 0.00 0.00 0.00 00:08:00.152 [2024-10-16T09:22:24.556Z] =================================================================================================================== 00:08:00.152 [2024-10-16T09:22:24.556Z] Total : 6310.14 24.65 0.00 0.00 0.00 0.00 0.00 00:08:00.152 00:08:01.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.529 Nvme0n1 : 8.00 6299.25 24.61 0.00 0.00 0.00 0.00 0.00 00:08:01.529 [2024-10-16T09:22:25.933Z] =================================================================================================================== 00:08:01.529 [2024-10-16T09:22:25.933Z] Total : 6299.25 24.61 0.00 0.00 0.00 0.00 0.00 00:08:01.529 00:08:02.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.520 Nvme0n1 : 9.00 6304.89 24.63 0.00 0.00 0.00 0.00 0.00 00:08:02.520 [2024-10-16T09:22:26.924Z] =================================================================================================================== 00:08:02.520 [2024-10-16T09:22:26.924Z] Total : 6304.89 24.63 0.00 0.00 0.00 0.00 0.00 00:08:02.520 00:08:03.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.456 Nvme0n1 : 10.00 6322.10 24.70 0.00 0.00 0.00 0.00 0.00 00:08:03.456 [2024-10-16T09:22:27.860Z] =================================================================================================================== 00:08:03.456 [2024-10-16T09:22:27.860Z] Total : 6322.10 24.70 0.00 0.00 0.00 0.00 0.00 00:08:03.456 00:08:03.456 00:08:03.456 Latency(us) 00:08:03.456 [2024-10-16T09:22:27.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.456 Nvme0n1 : 10.00 6320.08 24.69 0.00 0.00 20242.77 15847.80 115819.99 00:08:03.456 [2024-10-16T09:22:27.860Z] =================================================================================================================== 00:08:03.456 [2024-10-16T09:22:27.860Z] Total : 6320.08 24.69 0.00 0.00 20242.77 15847.80 115819.99 00:08:03.456 { 00:08:03.456 "results": [ 00:08:03.456 { 00:08:03.456 "job": "Nvme0n1", 00:08:03.456 "core_mask": "0x2", 00:08:03.456 "workload": "randwrite", 00:08:03.456 "status": "finished", 00:08:03.456 "queue_depth": 128, 00:08:03.456 "io_size": 4096, 00:08:03.456 "runtime": 
10.003351, 00:08:03.456 "iops": 6320.082140474727, 00:08:03.456 "mibps": 24.6878208612294, 00:08:03.456 "io_failed": 0, 00:08:03.456 "io_timeout": 0, 00:08:03.456 "avg_latency_us": 20242.76769386951, 00:08:03.456 "min_latency_us": 15847.796363636364, 00:08:03.456 "max_latency_us": 115819.98545454546 00:08:03.456 } 00:08:03.456 ], 00:08:03.456 "core_count": 1 00:08:03.456 } 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63350 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63350 ']' 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 63350 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63350 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:03.456 killing process with pid 63350 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63350' 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63350 00:08:03.456 Received shutdown signal, test time was about 10.000000 seconds 00:08:03.456 00:08:03.456 Latency(us) 00:08:03.456 [2024-10-16T09:22:27.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.456 [2024-10-16T09:22:27.860Z] =================================================================================================================== 00:08:03.456 [2024-10-16T09:22:27.860Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63350 00:08:03.456 09:22:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:03.715 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.973 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:08:03.973 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:04.232 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:04.232 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:04.232 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.491 [2024-10-16 09:22:28.850179] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:04.491 09:22:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:08:04.750 request: 00:08:04.750 { 00:08:04.750 "uuid": "2704bd39-56bc-45e7-bb9e-ae8d75c67eea", 00:08:04.750 "method": "bdev_lvol_get_lvstores", 00:08:04.750 "req_id": 1 00:08:04.750 } 00:08:04.750 Got JSON-RPC error response 00:08:04.750 response: 00:08:04.750 { 00:08:04.750 "code": -19, 00:08:04.750 "message": "No such device" 00:08:04.750 } 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.008 aio_bdev 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
34efcfa8-c69d-4e60-9d2a-3334e90519a3 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=34efcfa8-c69d-4e60-9d2a-3334e90519a3 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.008 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.266 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:05.266 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34efcfa8-c69d-4e60-9d2a-3334e90519a3 -t 2000 00:08:05.524 [ 00:08:05.524 { 00:08:05.524 "name": "34efcfa8-c69d-4e60-9d2a-3334e90519a3", 00:08:05.524 "aliases": [ 00:08:05.524 "lvs/lvol" 00:08:05.524 ], 00:08:05.524 "product_name": "Logical Volume", 00:08:05.524 "block_size": 4096, 00:08:05.524 "num_blocks": 38912, 00:08:05.524 "uuid": "34efcfa8-c69d-4e60-9d2a-3334e90519a3", 00:08:05.524 "assigned_rate_limits": { 00:08:05.524 "rw_ios_per_sec": 0, 00:08:05.524 "rw_mbytes_per_sec": 0, 00:08:05.524 "r_mbytes_per_sec": 0, 00:08:05.524 "w_mbytes_per_sec": 0 00:08:05.524 }, 00:08:05.524 "claimed": false, 00:08:05.524 "zoned": false, 00:08:05.524 "supported_io_types": { 00:08:05.524 "read": true, 00:08:05.524 "write": true, 00:08:05.524 "unmap": true, 00:08:05.524 "flush": false, 00:08:05.524 "reset": true, 00:08:05.524 "nvme_admin": false, 00:08:05.524 "nvme_io": false, 00:08:05.524 "nvme_io_md": false, 00:08:05.524 "write_zeroes": true, 00:08:05.524 "zcopy": false, 00:08:05.524 "get_zone_info": false, 00:08:05.524 "zone_management": false, 00:08:05.524 "zone_append": false, 00:08:05.524 "compare": false, 00:08:05.524 "compare_and_write": false, 00:08:05.524 "abort": false, 00:08:05.524 "seek_hole": true, 00:08:05.524 "seek_data": true, 00:08:05.524 "copy": false, 00:08:05.524 "nvme_iov_md": false 00:08:05.524 }, 00:08:05.524 "driver_specific": { 00:08:05.524 "lvol": { 00:08:05.524 "lvol_store_uuid": "2704bd39-56bc-45e7-bb9e-ae8d75c67eea", 00:08:05.524 "base_bdev": "aio_bdev", 00:08:05.524 "thin_provision": false, 00:08:05.524 "num_allocated_clusters": 38, 00:08:05.524 "snapshot": false, 00:08:05.524 "clone": false, 00:08:05.524 "esnap_clone": false 00:08:05.524 } 00:08:05.524 } 00:08:05.524 } 00:08:05.524 ] 00:08:05.782 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:05.782 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:08:05.782 09:22:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:06.040 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:06.040 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:08:06.040 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:06.299 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:06.299 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 34efcfa8-c69d-4e60-9d2a-3334e90519a3 00:08:06.556 09:22:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2704bd39-56bc-45e7-bb9e-ae8d75c67eea 00:08:06.814 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:07.072 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.330 ************************************ 00:08:07.330 END TEST lvs_grow_clean 00:08:07.330 ************************************ 00:08:07.330 00:08:07.330 real 0m17.786s 00:08:07.330 user 0m16.631s 00:08:07.330 sys 0m2.484s 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.330 ************************************ 00:08:07.330 START TEST lvs_grow_dirty 00:08:07.330 ************************************ 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.330 09:22:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.896 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.896 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.154 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=325ed3c2-5634-4055-9244-4023c78a298a 00:08:08.154 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.154 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:08.431 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.431 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.431 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 325ed3c2-5634-4055-9244-4023c78a298a lvol 150 00:08:08.703 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8a5fc166-f368-468b-b613-8659176af3c1 00:08:08.703 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:08.703 09:22:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.961 [2024-10-16 09:22:33.109468] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.961 [2024-10-16 09:22:33.109804] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.961 true 00:08:08.961 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.961 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:09.219 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:09.219 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.219 09:22:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8a5fc166-f368-468b-b613-8659176af3c1 00:08:09.786 09:22:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:09.786 [2024-10-16 09:22:34.146125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:09.786 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:10.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63614 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63614 /var/tmp/bdevperf.sock 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63614 ']' 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.044 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.302 [2024-10-16 09:22:34.493791] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:08:10.302 [2024-10-16 09:22:34.494075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63614 ] 00:08:10.302 [2024-10-16 09:22:34.629911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.302 [2024-10-16 09:22:34.684189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.560 [2024-10-16 09:22:34.741929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.560 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.560 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:10.560 09:22:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.818 Nvme0n1 00:08:10.818 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:11.076 [ 00:08:11.076 { 00:08:11.076 "name": "Nvme0n1", 00:08:11.076 "aliases": [ 00:08:11.076 "8a5fc166-f368-468b-b613-8659176af3c1" 00:08:11.076 ], 00:08:11.076 "product_name": "NVMe disk", 00:08:11.076 "block_size": 4096, 00:08:11.076 "num_blocks": 38912, 00:08:11.076 "uuid": "8a5fc166-f368-468b-b613-8659176af3c1", 00:08:11.076 "numa_id": -1, 00:08:11.076 "assigned_rate_limits": { 00:08:11.076 "rw_ios_per_sec": 0, 00:08:11.076 "rw_mbytes_per_sec": 0, 00:08:11.076 "r_mbytes_per_sec": 0, 00:08:11.076 "w_mbytes_per_sec": 0 00:08:11.076 }, 00:08:11.076 "claimed": false, 00:08:11.076 "zoned": false, 00:08:11.076 "supported_io_types": { 00:08:11.076 "read": true, 00:08:11.076 "write": true, 00:08:11.076 "unmap": true, 00:08:11.076 "flush": true, 00:08:11.076 "reset": true, 00:08:11.076 "nvme_admin": true, 00:08:11.076 "nvme_io": true, 00:08:11.076 "nvme_io_md": false, 00:08:11.076 "write_zeroes": true, 00:08:11.076 "zcopy": false, 00:08:11.076 "get_zone_info": false, 00:08:11.076 "zone_management": false, 00:08:11.076 "zone_append": false, 00:08:11.076 "compare": true, 00:08:11.076 "compare_and_write": true, 00:08:11.076 "abort": true, 00:08:11.076 "seek_hole": false, 00:08:11.076 "seek_data": false, 00:08:11.076 "copy": true, 00:08:11.076 "nvme_iov_md": false 00:08:11.076 }, 00:08:11.076 "memory_domains": [ 00:08:11.076 { 00:08:11.076 "dma_device_id": "system", 00:08:11.076 "dma_device_type": 1 00:08:11.076 } 00:08:11.077 ], 00:08:11.077 "driver_specific": { 00:08:11.077 "nvme": [ 00:08:11.077 { 00:08:11.077 "trid": { 00:08:11.077 "trtype": "TCP", 00:08:11.077 "adrfam": "IPv4", 00:08:11.077 "traddr": "10.0.0.3", 00:08:11.077 "trsvcid": "4420", 00:08:11.077 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:11.077 }, 00:08:11.077 "ctrlr_data": { 00:08:11.077 "cntlid": 1, 00:08:11.077 "vendor_id": "0x8086", 00:08:11.077 "model_number": "SPDK bdev Controller", 00:08:11.077 "serial_number": "SPDK0", 00:08:11.077 "firmware_revision": "25.01", 00:08:11.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.077 "oacs": { 00:08:11.077 "security": 0, 00:08:11.077 "format": 0, 00:08:11.077 "firmware": 0, 
00:08:11.077 "ns_manage": 0 00:08:11.077 }, 00:08:11.077 "multi_ctrlr": true, 00:08:11.077 "ana_reporting": false 00:08:11.077 }, 00:08:11.077 "vs": { 00:08:11.077 "nvme_version": "1.3" 00:08:11.077 }, 00:08:11.077 "ns_data": { 00:08:11.077 "id": 1, 00:08:11.077 "can_share": true 00:08:11.077 } 00:08:11.077 } 00:08:11.077 ], 00:08:11.077 "mp_policy": "active_passive" 00:08:11.077 } 00:08:11.077 } 00:08:11.077 ] 00:08:11.077 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.077 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63629 00:08:11.077 09:22:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:11.077 Running I/O for 10 seconds... 00:08:12.453 Latency(us) 00:08:12.453 [2024-10-16T09:22:36.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.453 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:12.453 [2024-10-16T09:22:36.857Z] =================================================================================================================== 00:08:12.453 [2024-10-16T09:22:36.857Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:12.453 00:08:13.022 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:13.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.283 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:13.283 [2024-10-16T09:22:37.687Z] =================================================================================================================== 00:08:13.283 [2024-10-16T09:22:37.687Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:13.283 00:08:13.283 true 00:08:13.283 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:13.283 09:22:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.851 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.851 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.851 09:22:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63629 00:08:14.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.110 Nvme0n1 : 3.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:14.110 [2024-10-16T09:22:38.514Z] =================================================================================================================== 00:08:14.110 [2024-10-16T09:22:38.514Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:14.110 00:08:15.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.500 Nvme0n1 : 4.00 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:08:15.500 [2024-10-16T09:22:39.904Z] 
=================================================================================================================== 00:08:15.500 [2024-10-16T09:22:39.904Z] Total : 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:08:15.500 00:08:16.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.079 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:08:16.079 [2024-10-16T09:22:40.483Z] =================================================================================================================== 00:08:16.079 [2024-10-16T09:22:40.483Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:08:16.079 00:08:17.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.454 Nvme0n1 : 6.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:17.454 [2024-10-16T09:22:41.858Z] =================================================================================================================== 00:08:17.454 [2024-10-16T09:22:41.858Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:08:17.454 00:08:18.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.388 Nvme0n1 : 7.00 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:08:18.388 [2024-10-16T09:22:42.792Z] =================================================================================================================== 00:08:18.388 [2024-10-16T09:22:42.792Z] Total : 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:08:18.388 00:08:19.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.323 Nvme0n1 : 8.00 6223.50 24.31 0.00 0.00 0.00 0.00 0.00 00:08:19.323 [2024-10-16T09:22:43.727Z] =================================================================================================================== 00:08:19.323 [2024-10-16T09:22:43.727Z] Total : 6223.50 24.31 0.00 0.00 0.00 0.00 0.00 00:08:19.323 00:08:20.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.260 Nvme0n1 : 9.00 6251.67 24.42 0.00 0.00 0.00 0.00 0.00 00:08:20.260 [2024-10-16T09:22:44.664Z] =================================================================================================================== 00:08:20.260 [2024-10-16T09:22:44.664Z] Total : 6251.67 24.42 0.00 0.00 0.00 0.00 0.00 00:08:20.260 00:08:21.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.197 Nvme0n1 : 10.00 6274.20 24.51 0.00 0.00 0.00 0.00 0.00 00:08:21.197 [2024-10-16T09:22:45.601Z] =================================================================================================================== 00:08:21.197 [2024-10-16T09:22:45.601Z] Total : 6274.20 24.51 0.00 0.00 0.00 0.00 0.00 00:08:21.197 00:08:21.197 00:08:21.198 Latency(us) 00:08:21.198 [2024-10-16T09:22:45.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.198 Nvme0n1 : 10.02 6273.64 24.51 0.00 0.00 20396.55 5213.09 221154.21 00:08:21.198 [2024-10-16T09:22:45.602Z] =================================================================================================================== 00:08:21.198 [2024-10-16T09:22:45.602Z] Total : 6273.64 24.51 0.00 0.00 20396.55 5213.09 221154.21 00:08:21.198 { 00:08:21.198 "results": [ 00:08:21.198 { 00:08:21.198 "job": "Nvme0n1", 00:08:21.198 "core_mask": "0x2", 00:08:21.198 "workload": "randwrite", 00:08:21.198 "status": "finished", 00:08:21.198 "queue_depth": 128, 00:08:21.198 "io_size": 4096, 00:08:21.198 "runtime": 
10.021303, 00:08:21.198 "iops": 6273.6352747741485, 00:08:21.198 "mibps": 24.506387792086517, 00:08:21.198 "io_failed": 0, 00:08:21.198 "io_timeout": 0, 00:08:21.198 "avg_latency_us": 20396.545769423195, 00:08:21.198 "min_latency_us": 5213.090909090909, 00:08:21.198 "max_latency_us": 221154.2109090909 00:08:21.198 } 00:08:21.198 ], 00:08:21.198 "core_count": 1 00:08:21.198 } 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63614 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63614 ']' 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63614 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63614 00:08:21.198 killing process with pid 63614 00:08:21.198 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.198 00:08:21.198 Latency(us) 00:08:21.198 [2024-10-16T09:22:45.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.198 [2024-10-16T09:22:45.602Z] =================================================================================================================== 00:08:21.198 [2024-10-16T09:22:45.602Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63614' 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63614 00:08:21.198 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63614 00:08:21.456 09:22:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:21.715 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.974 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:21.974 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63275 
00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63275 00:08:22.234 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63275 Killed "${NVMF_APP[@]}" "$@" 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:22.234 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=63767 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 63767 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63767 ']' 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.493 09:22:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.493 [2024-10-16 09:22:46.702142] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:08:22.493 [2024-10-16 09:22:46.702487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.493 [2024-10-16 09:22:46.845488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.753 [2024-10-16 09:22:46.906386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.753 [2024-10-16 09:22:46.906726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.753 [2024-10-16 09:22:46.906968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.753 [2024-10-16 09:22:46.906989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.753 [2024-10-16 09:22:46.907000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
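The dirty-grow variant above kills nvmf_tgt with SIGKILL (kill -9) after growing the lvstore, so the blobstore metadata on the aio backing file is never cleanly unloaded; the trace lines that follow show the restarted target re-attaching the backing file and recovering the lvstore. A minimal sketch of that recovery check, assuming rpc.py from the same SPDK checkout is used and with illustrative placeholder UUIDs (not the ones in this run):

    # Re-create the AIO bdev on the same backing file; loading it triggers
    # blobstore recovery because the lvstore was not cleanly shut down.
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096

    # Wait for examine to finish so the recovered lvol bdev is registered again.
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000

    # The grown geometry must survive the unclean shutdown: in this run
    # total_data_clusters should still be 99 and free_clusters 61.
    scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'
    scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'

In the scripted test the same check is performed by waitforbdev and the cluster comparisons at nvmf_lvs_grow.sh@79-89 in the trace below.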
00:08:22.753 [2024-10-16 09:22:46.907467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.753 [2024-10-16 09:22:46.965072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.318 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.319 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:23.319 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:23.319 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:23.319 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.319 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.319 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.577 [2024-10-16 09:22:47.928104] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:23.577 [2024-10-16 09:22:47.928720] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:23.577 [2024-10-16 09:22:47.929084] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8a5fc166-f368-468b-b613-8659176af3c1 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8a5fc166-f368-468b-b613-8659176af3c1 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.577 09:22:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.145 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a5fc166-f368-468b-b613-8659176af3c1 -t 2000 00:08:24.403 [ 00:08:24.403 { 00:08:24.403 "name": "8a5fc166-f368-468b-b613-8659176af3c1", 00:08:24.403 "aliases": [ 00:08:24.403 "lvs/lvol" 00:08:24.403 ], 00:08:24.403 "product_name": "Logical Volume", 00:08:24.403 "block_size": 4096, 00:08:24.403 "num_blocks": 38912, 00:08:24.403 "uuid": "8a5fc166-f368-468b-b613-8659176af3c1", 00:08:24.403 "assigned_rate_limits": { 00:08:24.403 "rw_ios_per_sec": 0, 00:08:24.403 "rw_mbytes_per_sec": 0, 00:08:24.403 "r_mbytes_per_sec": 0, 00:08:24.403 "w_mbytes_per_sec": 0 00:08:24.403 }, 00:08:24.403 
"claimed": false, 00:08:24.403 "zoned": false, 00:08:24.403 "supported_io_types": { 00:08:24.403 "read": true, 00:08:24.403 "write": true, 00:08:24.403 "unmap": true, 00:08:24.403 "flush": false, 00:08:24.403 "reset": true, 00:08:24.403 "nvme_admin": false, 00:08:24.403 "nvme_io": false, 00:08:24.403 "nvme_io_md": false, 00:08:24.403 "write_zeroes": true, 00:08:24.403 "zcopy": false, 00:08:24.403 "get_zone_info": false, 00:08:24.403 "zone_management": false, 00:08:24.403 "zone_append": false, 00:08:24.403 "compare": false, 00:08:24.403 "compare_and_write": false, 00:08:24.403 "abort": false, 00:08:24.403 "seek_hole": true, 00:08:24.403 "seek_data": true, 00:08:24.403 "copy": false, 00:08:24.403 "nvme_iov_md": false 00:08:24.403 }, 00:08:24.403 "driver_specific": { 00:08:24.403 "lvol": { 00:08:24.403 "lvol_store_uuid": "325ed3c2-5634-4055-9244-4023c78a298a", 00:08:24.403 "base_bdev": "aio_bdev", 00:08:24.403 "thin_provision": false, 00:08:24.403 "num_allocated_clusters": 38, 00:08:24.403 "snapshot": false, 00:08:24.403 "clone": false, 00:08:24.403 "esnap_clone": false 00:08:24.403 } 00:08:24.403 } 00:08:24.403 } 00:08:24.403 ] 00:08:24.403 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:24.403 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:24.403 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:24.662 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:24.662 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:24.662 09:22:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:24.920 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:24.920 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.180 [2024-10-16 09:22:49.370058] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.180 09:22:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:25.180 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:25.439 request: 00:08:25.439 { 00:08:25.439 "uuid": "325ed3c2-5634-4055-9244-4023c78a298a", 00:08:25.439 "method": "bdev_lvol_get_lvstores", 00:08:25.439 "req_id": 1 00:08:25.439 } 00:08:25.439 Got JSON-RPC error response 00:08:25.439 response: 00:08:25.439 { 00:08:25.439 "code": -19, 00:08:25.439 "message": "No such device" 00:08:25.439 } 00:08:25.439 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:25.439 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.439 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.439 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.439 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.698 aio_bdev 00:08:25.698 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8a5fc166-f368-468b-b613-8659176af3c1 00:08:25.698 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=8a5fc166-f368-468b-b613-8659176af3c1 00:08:25.698 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.698 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:25.698 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.698 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.698 09:22:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.969 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a5fc166-f368-468b-b613-8659176af3c1 -t 2000 00:08:25.969 [ 00:08:25.969 { 
00:08:25.969 "name": "8a5fc166-f368-468b-b613-8659176af3c1", 00:08:25.969 "aliases": [ 00:08:25.969 "lvs/lvol" 00:08:25.969 ], 00:08:25.969 "product_name": "Logical Volume", 00:08:25.969 "block_size": 4096, 00:08:25.969 "num_blocks": 38912, 00:08:25.969 "uuid": "8a5fc166-f368-468b-b613-8659176af3c1", 00:08:25.969 "assigned_rate_limits": { 00:08:25.969 "rw_ios_per_sec": 0, 00:08:25.969 "rw_mbytes_per_sec": 0, 00:08:25.969 "r_mbytes_per_sec": 0, 00:08:25.969 "w_mbytes_per_sec": 0 00:08:25.969 }, 00:08:25.969 "claimed": false, 00:08:25.969 "zoned": false, 00:08:25.969 "supported_io_types": { 00:08:25.969 "read": true, 00:08:25.969 "write": true, 00:08:25.969 "unmap": true, 00:08:25.969 "flush": false, 00:08:25.969 "reset": true, 00:08:25.969 "nvme_admin": false, 00:08:25.969 "nvme_io": false, 00:08:25.969 "nvme_io_md": false, 00:08:25.969 "write_zeroes": true, 00:08:25.969 "zcopy": false, 00:08:25.969 "get_zone_info": false, 00:08:25.969 "zone_management": false, 00:08:25.969 "zone_append": false, 00:08:25.969 "compare": false, 00:08:25.969 "compare_and_write": false, 00:08:25.969 "abort": false, 00:08:25.969 "seek_hole": true, 00:08:25.969 "seek_data": true, 00:08:25.969 "copy": false, 00:08:25.969 "nvme_iov_md": false 00:08:25.969 }, 00:08:25.969 "driver_specific": { 00:08:25.969 "lvol": { 00:08:25.969 "lvol_store_uuid": "325ed3c2-5634-4055-9244-4023c78a298a", 00:08:25.969 "base_bdev": "aio_bdev", 00:08:25.969 "thin_provision": false, 00:08:25.969 "num_allocated_clusters": 38, 00:08:25.969 "snapshot": false, 00:08:25.969 "clone": false, 00:08:25.969 "esnap_clone": false 00:08:25.969 } 00:08:25.969 } 00:08:25.969 } 00:08:25.969 ] 00:08:25.969 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:25.969 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:25.969 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:26.535 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:26.535 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:26.535 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:26.535 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:26.535 09:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8a5fc166-f368-468b-b613-8659176af3c1 00:08:26.793 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 325ed3c2-5634-4055-9244-4023c78a298a 00:08:27.052 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:27.311 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:27.570 ************************************ 00:08:27.570 END TEST lvs_grow_dirty 00:08:27.570 ************************************ 00:08:27.570 00:08:27.570 real 0m20.294s 00:08:27.570 user 0m39.823s 00:08:27.570 sys 0m9.244s 00:08:27.570 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.570 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.829 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:27.829 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:27.829 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:27.829 09:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:27.829 nvmf_trace.0 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:27.829 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:28.396 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.396 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:28.396 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.396 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.396 rmmod nvme_tcp 00:08:28.396 rmmod nvme_fabrics 00:08:28.396 rmmod nvme_keyring 00:08:28.396 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.396 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:28.396 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 63767 ']' 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 63767 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 63767 ']' 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 63767 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:28.397 09:22:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63767 00:08:28.397 killing process with pid 63767 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63767' 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 63767 00:08:28.397 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 63767 00:08:28.655 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:28.655 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:28.655 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:28.655 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:28.655 09:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:28.655 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:28.914 ************************************ 00:08:28.914 END TEST nvmf_lvs_grow 00:08:28.914 ************************************ 00:08:28.914 00:08:28.914 real 0m40.761s 00:08:28.914 user 1m3.371s 00:08:28.914 sys 0m12.977s 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.914 ************************************ 00:08:28.914 START TEST nvmf_bdev_io_wait 00:08:28.914 ************************************ 00:08:28.914 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.174 * Looking for test storage... 
00:08:29.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.174 --rc genhtml_branch_coverage=1 00:08:29.174 --rc genhtml_function_coverage=1 00:08:29.174 --rc genhtml_legend=1 00:08:29.174 --rc geninfo_all_blocks=1 00:08:29.174 --rc geninfo_unexecuted_blocks=1 00:08:29.174 00:08:29.174 ' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:29.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.174 --rc genhtml_branch_coverage=1 00:08:29.174 --rc genhtml_function_coverage=1 00:08:29.174 --rc genhtml_legend=1 00:08:29.174 --rc geninfo_all_blocks=1 00:08:29.174 --rc geninfo_unexecuted_blocks=1 00:08:29.174 00:08:29.174 ' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.174 --rc genhtml_branch_coverage=1 00:08:29.174 --rc genhtml_function_coverage=1 00:08:29.174 --rc genhtml_legend=1 00:08:29.174 --rc geninfo_all_blocks=1 00:08:29.174 --rc geninfo_unexecuted_blocks=1 00:08:29.174 00:08:29.174 ' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.174 --rc genhtml_branch_coverage=1 00:08:29.174 --rc genhtml_function_coverage=1 00:08:29.174 --rc genhtml_legend=1 00:08:29.174 --rc geninfo_all_blocks=1 00:08:29.174 --rc geninfo_unexecuted_blocks=1 00:08:29.174 00:08:29.174 ' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.174 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
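The two MALLOC_* knobs set at the top of bdev_io_wait.sh size the backing device that the target exports once it is running; the matching RPC shows up further down in this trace. A condensed sketch (rpc_cmd is the harness wrapper — shown here as a direct scripts/rpc.py call, which is an assumption for readability, not part of the trace):

  # knobs set above in bdev_io_wait.sh
  MALLOC_BDEV_SIZE=64        # malloc bdev size in MB
  MALLOC_BLOCK_SIZE=512      # block size in bytes
  # backing bdev created once the target is up (traced later as: rpc_cmd bdev_malloc_create 64 512 -b Malloc0)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc0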
00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.175 
09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:29.175 Cannot find device "nvmf_init_br" 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:29.175 Cannot find device "nvmf_init_br2" 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:29.175 Cannot find device "nvmf_tgt_br" 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.175 Cannot find device "nvmf_tgt_br2" 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:29.175 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:29.434 Cannot find device "nvmf_init_br" 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:29.434 Cannot find device "nvmf_init_br2" 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:29.434 Cannot find device "nvmf_tgt_br" 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:29.434 Cannot find device "nvmf_tgt_br2" 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:29.434 Cannot find device "nvmf_br" 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:29.434 Cannot find device "nvmf_init_if" 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:29.434 Cannot find device "nvmf_init_if2" 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:29.434 
09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.434 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.434 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.435 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.435 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:29.693 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.693 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:08:29.693 00:08:29.693 --- 10.0.0.3 ping statistics --- 00:08:29.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.693 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:29.693 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:29.693 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:08:29.693 00:08:29.693 --- 10.0.0.4 ping statistics --- 00:08:29.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.693 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:29.693 00:08:29.693 --- 10.0.0.1 ping statistics --- 00:08:29.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.693 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:29.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:29.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:08:29.693 00:08:29.693 --- 10.0.0.2 ping statistics --- 00:08:29.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.693 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.693 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=64145 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 64145 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 64145 ']' 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.694 09:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.694 [2024-10-16 09:22:54.015303] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:08:29.694 [2024-10-16 09:22:54.015405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.952 [2024-10-16 09:22:54.156141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.952 [2024-10-16 09:22:54.221717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.952 [2024-10-16 09:22:54.221794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.952 [2024-10-16 09:22:54.221808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.952 [2024-10-16 09:22:54.221819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.952 [2024-10-16 09:22:54.221831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.952 [2024-10-16 09:22:54.223372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.953 [2024-10-16 09:22:54.223516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.953 [2024-10-16 09:22:54.223649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.953 [2024-10-16 09:22:54.223664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.953 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 [2024-10-16 09:22:54.398311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 [2024-10-16 09:22:54.412698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 Malloc0 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 [2024-10-16 09:22:54.480297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64167 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64169 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:30.212 09:22:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:30.212 { 00:08:30.212 "params": { 00:08:30.212 "name": "Nvme$subsystem", 00:08:30.212 "trtype": "$TEST_TRANSPORT", 00:08:30.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.212 "adrfam": "ipv4", 00:08:30.212 "trsvcid": "$NVMF_PORT", 00:08:30.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.212 "hdgst": ${hdgst:-false}, 00:08:30.212 "ddgst": ${ddgst:-false} 00:08:30.212 }, 00:08:30.212 "method": "bdev_nvme_attach_controller" 00:08:30.212 } 00:08:30.212 EOF 00:08:30.212 )") 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64171 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:30.212 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:30.212 { 00:08:30.212 "params": { 00:08:30.212 "name": "Nvme$subsystem", 00:08:30.212 "trtype": "$TEST_TRANSPORT", 00:08:30.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.212 "adrfam": "ipv4", 00:08:30.212 "trsvcid": "$NVMF_PORT", 00:08:30.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.213 "hdgst": ${hdgst:-false}, 00:08:30.213 "ddgst": ${ddgst:-false} 00:08:30.213 }, 00:08:30.213 "method": "bdev_nvme_attach_controller" 00:08:30.213 } 00:08:30.213 EOF 00:08:30.213 )") 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64174 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:30.213 { 00:08:30.213 "params": { 00:08:30.213 "name": "Nvme$subsystem", 00:08:30.213 "trtype": "$TEST_TRANSPORT", 00:08:30.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.213 "adrfam": "ipv4", 00:08:30.213 "trsvcid": 
"$NVMF_PORT", 00:08:30.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.213 "hdgst": ${hdgst:-false}, 00:08:30.213 "ddgst": ${ddgst:-false} 00:08:30.213 }, 00:08:30.213 "method": "bdev_nvme_attach_controller" 00:08:30.213 } 00:08:30.213 EOF 00:08:30.213 )") 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:30.213 "params": { 00:08:30.213 "name": "Nvme1", 00:08:30.213 "trtype": "tcp", 00:08:30.213 "traddr": "10.0.0.3", 00:08:30.213 "adrfam": "ipv4", 00:08:30.213 "trsvcid": "4420", 00:08:30.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.213 "hdgst": false, 00:08:30.213 "ddgst": false 00:08:30.213 }, 00:08:30.213 "method": "bdev_nvme_attach_controller" 00:08:30.213 }' 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:30.213 "params": { 00:08:30.213 "name": "Nvme1", 00:08:30.213 "trtype": "tcp", 00:08:30.213 "traddr": "10.0.0.3", 00:08:30.213 "adrfam": "ipv4", 00:08:30.213 "trsvcid": "4420", 00:08:30.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.213 "hdgst": false, 00:08:30.213 "ddgst": false 00:08:30.213 }, 00:08:30.213 "method": "bdev_nvme_attach_controller" 00:08:30.213 }' 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:30.213 { 00:08:30.213 "params": { 00:08:30.213 "name": "Nvme$subsystem", 00:08:30.213 "trtype": "$TEST_TRANSPORT", 00:08:30.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.213 "adrfam": "ipv4", 00:08:30.213 "trsvcid": "$NVMF_PORT", 00:08:30.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.213 "hdgst": ${hdgst:-false}, 00:08:30.213 "ddgst": ${ddgst:-false} 00:08:30.213 }, 00:08:30.213 "method": "bdev_nvme_attach_controller" 00:08:30.213 } 00:08:30.213 EOF 00:08:30.213 )") 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:30.213 "params": { 00:08:30.213 "name": "Nvme1", 00:08:30.213 "trtype": "tcp", 00:08:30.213 "traddr": "10.0.0.3", 00:08:30.213 "adrfam": "ipv4", 00:08:30.213 "trsvcid": "4420", 00:08:30.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.213 "hdgst": false, 00:08:30.213 "ddgst": false 00:08:30.213 }, 00:08:30.213 "method": "bdev_nvme_attach_controller" 00:08:30.213 }' 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:30.213 "params": { 00:08:30.213 "name": "Nvme1", 00:08:30.213 "trtype": "tcp", 00:08:30.213 "traddr": "10.0.0.3", 00:08:30.213 "adrfam": "ipv4", 00:08:30.213 "trsvcid": "4420", 00:08:30.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.213 "hdgst": false, 00:08:30.213 "ddgst": false 00:08:30.213 }, 00:08:30.213 "method": "bdev_nvme_attach_controller" 00:08:30.213 }' 00:08:30.213 [2024-10-16 09:22:54.547389] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:08:30.213 [2024-10-16 09:22:54.547387] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:08:30.213 [2024-10-16 09:22:54.547481] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:30.213 [2024-10-16 09:22:54.547638] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:30.213 09:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64167 00:08:30.213 [2024-10-16 09:22:54.568199] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:08:30.213 [2024-10-16 09:22:54.568277] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:30.213 [2024-10-16 09:22:54.572108] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:08:30.213 [2024-10-16 09:22:54.572184] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:30.472 [2024-10-16 09:22:54.763684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.472 [2024-10-16 09:22:54.836358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:30.472 [2024-10-16 09:22:54.841680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.472 [2024-10-16 09:22:54.850123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.731 [2024-10-16 09:22:54.902604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:30.732 [2024-10-16 09:22:54.912224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.732 [2024-10-16 09:22:54.916281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.732 [2024-10-16 09:22:54.975082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:30.732 [2024-10-16 09:22:54.988979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.732 [2024-10-16 09:22:55.006708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.732 Running I/O for 1 seconds... 00:08:30.732 Running I/O for 1 seconds... 00:08:30.732 [2024-10-16 09:22:55.066073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:30.732 [2024-10-16 09:22:55.079915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.732 Running I/O for 1 seconds... 00:08:30.990 Running I/O for 1 seconds... 
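Editor's note: at this point four bdevperf instances come up on separate reactors and each runs a 1-second job against the same Nvme1n1 namespace (the tables below show flush, write, unmap and read workloads). A hypothetical sketch of such a fan-out is shown here; the real orchestration lives in target/bdev_io_wait.sh, and BDEVPERF plus gen_target_json (from the earlier sketch) are assumptions standing in for the paths and helpers it uses.

#!/usr/bin/env bash
# Launch one bdevperf per workload in the background, then wait for all of them,
# mirroring the core masks and the 'wait 64167/64169/64171/64174' calls in the log.
BDEVPERF=./build/examples/bdevperf
workloads=(write read flush unmap)
masks=(0x10 0x20 0x40 0x80)
pids=()

for i in "${!workloads[@]}"; do
    "$BDEVPERF" -m "${masks[$i]}" -i "$((i + 1))" --json <(gen_target_json 1) \
        -q 128 -o 4096 -w "${workloads[$i]}" -t 1 -s 256 &
    pids+=("$!")
done

# Same effect as the individual 'wait <pid>' calls above.
wait "${pids[@]}"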
00:08:31.927 4540.00 IOPS, 17.73 MiB/s [2024-10-16T09:22:56.331Z] 180584.00 IOPS, 705.41 MiB/s 00:08:31.927 Latency(us) 00:08:31.927 [2024-10-16T09:22:56.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.927 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:31.927 Nvme1n1 : 1.00 180213.37 703.96 0.00 0.00 706.50 374.23 2025.66 00:08:31.927 [2024-10-16T09:22:56.331Z] =================================================================================================================== 00:08:31.927 [2024-10-16T09:22:56.331Z] Total : 180213.37 703.96 0.00 0.00 706.50 374.23 2025.66 00:08:31.927 00:08:31.927 Latency(us) 00:08:31.927 [2024-10-16T09:22:56.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.927 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:31.927 Nvme1n1 : 1.03 4517.01 17.64 0.00 0.00 27752.25 6791.91 49569.05 00:08:31.927 [2024-10-16T09:22:56.331Z] =================================================================================================================== 00:08:31.927 [2024-10-16T09:22:56.331Z] Total : 4517.01 17.64 0.00 0.00 27752.25 6791.91 49569.05 00:08:31.927 4267.00 IOPS, 16.67 MiB/s 00:08:31.927 Latency(us) 00:08:31.927 [2024-10-16T09:22:56.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.927 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:31.927 Nvme1n1 : 1.01 4382.00 17.12 0.00 0.00 29090.96 7864.32 52428.80 00:08:31.927 [2024-10-16T09:22:56.331Z] =================================================================================================================== 00:08:31.927 [2024-10-16T09:22:56.331Z] Total : 4382.00 17.12 0.00 0.00 29090.96 7864.32 52428.80 00:08:31.927 6254.00 IOPS, 24.43 MiB/s 00:08:31.927 Latency(us) 00:08:31.927 [2024-10-16T09:22:56.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.927 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:31.927 Nvme1n1 : 1.01 6313.22 24.66 0.00 0.00 20152.81 8877.15 28955.00 00:08:31.927 [2024-10-16T09:22:56.331Z] =================================================================================================================== 00:08:31.927 [2024-10-16T09:22:56.331Z] Total : 6313.22 24.66 0.00 0.00 20152.81 8877.15 28955.00 00:08:31.927 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64169 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64171 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64174 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@514 -- # nvmfcleanup 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.187 rmmod nvme_tcp 00:08:32.187 rmmod nvme_fabrics 00:08:32.187 rmmod nvme_keyring 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 64145 ']' 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 64145 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 64145 ']' 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 64145 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64145 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.187 killing process with pid 64145 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64145' 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 64145 00:08:32.187 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 64145 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:32.446 09:22:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:32.446 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.705 09:22:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:32.705 00:08:32.705 real 0m3.718s 00:08:32.705 user 0m14.690s 00:08:32.705 sys 0m2.241s 00:08:32.705 ************************************ 00:08:32.705 END TEST nvmf_bdev_io_wait 00:08:32.705 ************************************ 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.705 ************************************ 00:08:32.705 START TEST nvmf_queue_depth 00:08:32.705 ************************************ 00:08:32.705 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:32.965 * Looking for test 
storage... 00:08:32.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.965 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:32.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.965 --rc genhtml_branch_coverage=1 00:08:32.965 --rc genhtml_function_coverage=1 00:08:32.965 --rc genhtml_legend=1 00:08:32.965 --rc geninfo_all_blocks=1 00:08:32.965 --rc geninfo_unexecuted_blocks=1 00:08:32.965 00:08:32.965 ' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.966 --rc genhtml_branch_coverage=1 00:08:32.966 --rc genhtml_function_coverage=1 00:08:32.966 --rc genhtml_legend=1 00:08:32.966 --rc geninfo_all_blocks=1 00:08:32.966 --rc geninfo_unexecuted_blocks=1 00:08:32.966 00:08:32.966 ' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.966 --rc genhtml_branch_coverage=1 00:08:32.966 --rc genhtml_function_coverage=1 00:08:32.966 --rc genhtml_legend=1 00:08:32.966 --rc geninfo_all_blocks=1 00:08:32.966 --rc geninfo_unexecuted_blocks=1 00:08:32.966 00:08:32.966 ' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:32.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.966 --rc genhtml_branch_coverage=1 00:08:32.966 --rc genhtml_function_coverage=1 00:08:32.966 --rc genhtml_legend=1 00:08:32.966 --rc geninfo_all_blocks=1 00:08:32.966 --rc geninfo_unexecuted_blocks=1 00:08:32.966 00:08:32.966 ' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.966 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:32.966 
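Editor's note: when queue_depth.sh sources test/nvmf/common.sh above, the helper generates a host NQN with `nvme gen-hostnqn` and keeps the matching UUID as the host ID (NVME_HOSTNQN/NVME_HOSTID, later passed via the NVME_HOST array to `nvme connect`). A small illustrative sketch of that pattern; the address, port and subsystem NQN below are the values the later tests use, not something this snippet derives itself.

#!/usr/bin/env bash
# Generate a host NQN once and reuse its embedded UUID as the host ID.
hostnqn=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
hostid=${hostnqn##*:}         # keep only the trailing UUID

nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$hostnqn" --hostid="$hostid"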
09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:32.966 09:22:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:32.966 Cannot find device "nvmf_init_br" 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:32.966 Cannot find device "nvmf_init_br2" 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:32.966 Cannot find device "nvmf_tgt_br" 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.966 Cannot find device "nvmf_tgt_br2" 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:32.966 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:32.966 Cannot find device "nvmf_init_br" 00:08:32.967 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:32.967 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:32.967 Cannot find device "nvmf_init_br2" 00:08:32.967 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:32.967 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:32.967 Cannot find device "nvmf_tgt_br" 00:08:32.967 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:32.967 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.225 Cannot find device "nvmf_tgt_br2" 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.225 Cannot find device "nvmf_br" 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.225 Cannot find device "nvmf_init_if" 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.225 Cannot find device "nvmf_init_if2" 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.225 09:22:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.225 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.225 
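Editor's note: nvmf_veth_init above builds the virtual test network — a namespace for the target, veth pairs for the initiator and target sides addressed 10.0.0.1-10.0.0.4/24, a bridge joining the host-side peers, and iptables rules opening TCP port 4420. A condensed sketch of one initiator/target pair follows (the log creates two of each); it needs root, and the names and addresses are the ones visible in the log.

#!/usr/bin/env bash
# Minimal one-pair version of the topology the log sets up (run as root).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# Quick connectivity check, as in the log:
ping -c 1 10.0.0.3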
09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.507 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.507 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:33.507 00:08:33.507 --- 10.0.0.3 ping statistics --- 00:08:33.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.507 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.507 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.507 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:08:33.507 00:08:33.507 --- 10.0.0.4 ping statistics --- 00:08:33.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.507 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:33.507 00:08:33.507 --- 10.0.0.1 ping statistics --- 00:08:33.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.507 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:33.507 00:08:33.507 --- 10.0.0.2 ping statistics --- 00:08:33.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.507 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=64433 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 64433 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64433 ']' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.507 09:22:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.507 [2024-10-16 09:22:57.773623] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
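Editor's note: with the bridge up and all four addresses answering pings, nvmfappstart launches the target inside the namespace (the `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2` line above) and waitforlisten blocks until its RPC socket responds. A sketch of that start-and-wait step; the readiness loop below polls with `rpc.py rpc_get_methods` as an illustrative probe rather than using the script's own waitforlisten helper.

#!/usr/bin/env bash
# Start the NVMe-oF target inside the test namespace and wait for its RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk   # path taken from the log

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# The unix RPC socket (/var/tmp/spdk.sock) is reachable from the host even though
# the target runs in its own network namespace, so plain rpc.py works here.
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready"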
00:08:33.507 [2024-10-16 09:22:57.773716] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.792 [2024-10-16 09:22:57.913642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.792 [2024-10-16 09:22:57.969139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.792 [2024-10-16 09:22:57.969201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.792 [2024-10-16 09:22:57.969216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.792 [2024-10-16 09:22:57.969226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.792 [2024-10-16 09:22:57.969235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.792 [2024-10-16 09:22:57.969689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.792 [2024-10-16 09:22:58.027522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.792 [2024-10-16 09:22:58.140836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.792 Malloc0 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.792 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.792 [2024-10-16 09:22:58.193918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64463 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64463 /var/tmp/bdevperf.sock 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64463 ']' 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.051 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.051 [2024-10-16 09:22:58.246249] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
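Editor's note: the rpc_cmd calls above provision the target end to end — TCP transport, a 64 MiB / 512 B Malloc bdev, subsystem cnode1 with that namespace, and a listener on 10.0.0.3:4420 — and then bdevperf is started with -z (wait for RPC) on its own socket so a controller can be attached before the run. The same sequence expressed directly with rpc.py is sketched below; rpc_cmd in the script is a thin wrapper around it, and the simple polling loop stands in for waitforlisten.

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# bdevperf in "wait for RPC" mode (-z) on its own socket: queue depth 1024, 10 s verify run.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# The script waits with waitforlisten; a simple poll is enough for a sketch.
until $RPC -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests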
00:08:34.051 [2024-10-16 09:22:58.246327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64463 ] 00:08:34.051 [2024-10-16 09:22:58.382905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.051 [2024-10-16 09:22:58.438305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.310 [2024-10-16 09:22:58.496180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.310 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.310 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:34.310 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:34.310 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.310 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.310 NVMe0n1 00:08:34.310 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.310 09:22:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.568 Running I/O for 10 seconds... 00:08:36.439 6852.00 IOPS, 26.77 MiB/s [2024-10-16T09:23:01.780Z] 7187.00 IOPS, 28.07 MiB/s [2024-10-16T09:23:03.158Z] 7511.67 IOPS, 29.34 MiB/s [2024-10-16T09:23:04.114Z] 7759.25 IOPS, 30.31 MiB/s [2024-10-16T09:23:05.060Z] 8006.60 IOPS, 31.28 MiB/s [2024-10-16T09:23:05.997Z] 8223.67 IOPS, 32.12 MiB/s [2024-10-16T09:23:06.933Z] 8420.29 IOPS, 32.89 MiB/s [2024-10-16T09:23:07.870Z] 8472.88 IOPS, 33.10 MiB/s [2024-10-16T09:23:08.806Z] 8547.78 IOPS, 33.39 MiB/s [2024-10-16T09:23:09.066Z] 8617.10 IOPS, 33.66 MiB/s 00:08:44.662 Latency(us) 00:08:44.662 [2024-10-16T09:23:09.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.662 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:44.662 Verification LBA range: start 0x0 length 0x4000 00:08:44.662 NVMe0n1 : 10.08 8648.20 33.78 0.00 0.00 117892.15 25499.46 92465.34 00:08:44.662 [2024-10-16T09:23:09.066Z] =================================================================================================================== 00:08:44.662 [2024-10-16T09:23:09.066Z] Total : 8648.20 33.78 0.00 0.00 117892.15 25499.46 92465.34 00:08:44.662 { 00:08:44.662 "results": [ 00:08:44.662 { 00:08:44.662 "job": "NVMe0n1", 00:08:44.662 "core_mask": "0x1", 00:08:44.662 "workload": "verify", 00:08:44.662 "status": "finished", 00:08:44.662 "verify_range": { 00:08:44.662 "start": 0, 00:08:44.662 "length": 16384 00:08:44.662 }, 00:08:44.662 "queue_depth": 1024, 00:08:44.662 "io_size": 4096, 00:08:44.662 "runtime": 10.08164, 00:08:44.662 "iops": 8648.196126820636, 00:08:44.662 "mibps": 33.78201612039311, 00:08:44.662 "io_failed": 0, 00:08:44.662 "io_timeout": 0, 00:08:44.662 "avg_latency_us": 117892.15244733429, 00:08:44.662 "min_latency_us": 25499.46181818182, 00:08:44.662 "max_latency_us": 92465.33818181818 00:08:44.662 } 
00:08:44.662 ], 00:08:44.662 "core_count": 1 00:08:44.662 } 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64463 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64463 ']' 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64463 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64463 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:44.662 killing process with pid 64463 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64463' 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64463 00:08:44.662 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.662 00:08:44.662 Latency(us) 00:08:44.662 [2024-10-16T09:23:09.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.662 [2024-10-16T09:23:09.066Z] =================================================================================================================== 00:08:44.662 [2024-10-16T09:23:09.066Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.662 09:23:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64463 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.921 rmmod nvme_tcp 00:08:44.921 rmmod nvme_fabrics 00:08:44.921 rmmod nvme_keyring 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 64433 ']' 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 64433 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64433 ']' 00:08:44.921 
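Editor's note: perform_tests returns the JSON block shown above (per-job IOPS, MiB/s, failure counts and latencies in microseconds). When post-processing such a run it is convenient to pull the headline numbers out with jq; a small hypothetical example, assuming the result JSON has been saved to results.json (that file name is not part of the test itself).

# Extract the headline numbers from a saved bdevperf result (hypothetical file name).
jq '{job: .results[0].job,
     iops: .results[0].iops,
     mibps: .results[0].mibps,
     avg_latency_us: .results[0].avg_latency_us,
     io_failed: .results[0].io_failed}' results.json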
09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64433 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64433 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:44.921 killing process with pid 64433 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64433' 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64433 00:08:44.921 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64433 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:45.179 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:45.438 09:23:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:45.438 00:08:45.438 real 0m12.612s 00:08:45.438 user 0m21.197s 00:08:45.438 sys 0m2.333s 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.438 ************************************ 00:08:45.438 END TEST nvmf_queue_depth 00:08:45.438 ************************************ 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.438 ************************************ 00:08:45.438 START TEST nvmf_target_multipath 00:08:45.438 ************************************ 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:45.438 * Looking for test storage... 
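[Annotation] The queue_depth teardown traced above (nvmftestfini) runs in a fixed order: unload the initiator-side NVMe/TCP kernel modules, kill the nvmf_tgt process, strip only the SPDK-tagged iptables rules, detach and delete the veth/bridge topology, and finally remove the target network namespace. The sketch below consolidates those commands into one function for reference; the interface, bridge, and namespace names are the ones from the log, but the function wrapper and ordering comments are illustrative, not the verbatim body of test/nvmf/common.sh.

    # Illustrative consolidation of the nvmftestfini / nvmf_veth_fini steps traced above.
    cleanup_nvmf_veth() {
        modprobe -r nvme-tcp nvme-fabrics                        # unload initiator modules
        iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK-tagged rules
        for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
            ip link set "$port" nomaster || true                 # detach bridge ports
            ip link set "$port" down     || true
        done
        ip link delete nvmf_br type bridge || true               # remove the bridge itself
        ip link delete nvmf_init_if  || true                     # initiator-side veth endpoints
        ip link delete nvmf_init_if2 || true
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true   # target-side endpoints
        ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
        ip netns delete nvmf_tgt_ns_spdk || true                 # finally drop the namespace
    }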
00:08:45.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.438 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.697 --rc genhtml_branch_coverage=1 00:08:45.697 --rc genhtml_function_coverage=1 00:08:45.697 --rc genhtml_legend=1 00:08:45.697 --rc geninfo_all_blocks=1 00:08:45.697 --rc geninfo_unexecuted_blocks=1 00:08:45.697 00:08:45.697 ' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.697 --rc genhtml_branch_coverage=1 00:08:45.697 --rc genhtml_function_coverage=1 00:08:45.697 --rc genhtml_legend=1 00:08:45.697 --rc geninfo_all_blocks=1 00:08:45.697 --rc geninfo_unexecuted_blocks=1 00:08:45.697 00:08:45.697 ' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.697 --rc genhtml_branch_coverage=1 00:08:45.697 --rc genhtml_function_coverage=1 00:08:45.697 --rc genhtml_legend=1 00:08:45.697 --rc geninfo_all_blocks=1 00:08:45.697 --rc geninfo_unexecuted_blocks=1 00:08:45.697 00:08:45.697 ' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.697 --rc genhtml_branch_coverage=1 00:08:45.697 --rc genhtml_function_coverage=1 00:08:45.697 --rc genhtml_legend=1 00:08:45.697 --rc geninfo_all_blocks=1 00:08:45.697 --rc geninfo_unexecuted_blocks=1 00:08:45.697 00:08:45.697 ' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.697 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.698 
09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.698 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:45.698 09:23:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:45.698 Cannot find device "nvmf_init_br" 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:45.698 Cannot find device "nvmf_init_br2" 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:45.698 Cannot find device "nvmf_tgt_br" 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:45.698 09:23:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.698 Cannot find device "nvmf_tgt_br2" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:45.698 Cannot find device "nvmf_init_br" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:45.698 Cannot find device "nvmf_init_br2" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:45.698 Cannot find device "nvmf_tgt_br" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:45.698 Cannot find device "nvmf_tgt_br2" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:45.698 Cannot find device "nvmf_br" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:45.698 Cannot find device "nvmf_init_if" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:45.698 Cannot find device "nvmf_init_if2" 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.698 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.698 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
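[Annotation] The nvmf_veth_init steps traced here build a small virtual topology for the multipath test: two initiator-side veth pairs (nvmf_init_if/nvmf_init_br and nvmf_init_if2/nvmf_init_br2) and two target-side pairs, with the target ends moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.1-10.0.0.4/24; the bridge and iptables ACCEPT rules that join the two sides are traced immediately below, followed by ping checks of all four addresses. A condensed sketch of one initiator/target pair, using the same names and addresses as the log (the second pair is analogous), is:

    # Condensed sketch of the veth/namespace topology the script assembles; error handling omitted.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # The *_br peers are enslaved to a bridge and TCP port 4420 is allowed in
    # with SPDK-tagged iptables rules, as traced next in the log.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'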
00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:45.957 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:45.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:45.957 00:08:45.957 --- 10.0.0.3 ping statistics --- 00:08:45.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.958 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:45.958 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:45.958 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:08:45.958 00:08:45.958 --- 10.0.0.4 ping statistics --- 00:08:45.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.958 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:45.958 00:08:45.958 --- 10.0.0.1 ping statistics --- 00:08:45.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.958 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:45.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:08:45.958 00:08:45.958 --- 10.0.0.2 ping statistics --- 00:08:45.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.958 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=64828 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 64828 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 64828 ']' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.958 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:46.216 [2024-10-16 09:23:10.420226] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:08:46.216 [2024-10-16 09:23:10.420316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.216 [2024-10-16 09:23:10.561447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.486 [2024-10-16 09:23:10.626239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.486 [2024-10-16 09:23:10.626502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.486 [2024-10-16 09:23:10.626687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.486 [2024-10-16 09:23:10.626871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.486 [2024-10-16 09:23:10.626915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.486 [2024-10-16 09:23:10.628248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.486 [2024-10-16 09:23:10.628402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.486 [2024-10-16 09:23:10.628943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.486 [2024-10-16 09:23:10.628981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.486 [2024-10-16 09:23:10.687173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.486 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.486 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:08:46.486 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:46.486 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.486 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:46.486 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.486 09:23:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.753 [2024-10-16 09:23:11.083133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.753 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:47.320 Malloc0 00:08:47.320 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:47.320 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.578 09:23:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:47.836 [2024-10-16 09:23:12.231086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:48.094 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:48.094 [2024-10-16 09:23:12.475217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:48.094 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:48.353 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:48.353 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.353 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.353 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.353 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.353 09:23:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64910 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:50.884 09:23:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:50.884 [global] 00:08:50.884 thread=1 00:08:50.884 invalidate=1 00:08:50.884 rw=randrw 00:08:50.884 time_based=1 00:08:50.884 runtime=6 00:08:50.884 ioengine=libaio 00:08:50.884 direct=1 00:08:50.884 bs=4096 00:08:50.884 iodepth=128 00:08:50.884 norandommap=0 00:08:50.884 numjobs=1 00:08:50.884 00:08:50.884 verify_dump=1 00:08:50.884 verify_backlog=512 00:08:50.884 verify_state_save=0 00:08:50.884 do_verify=1 00:08:50.885 verify=crc32c-intel 00:08:50.885 [job0] 00:08:50.885 filename=/dev/nvme0n1 00:08:50.885 Could not set queue depth (nvme0n1) 00:08:50.885 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.885 fio-3.35 00:08:50.885 Starting 1 thread 00:08:51.451 09:23:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:51.710 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:52.276 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:52.533 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:52.793 09:23:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64910 00:08:56.978 00:08:56.978 job0: (groupid=0, jobs=1): err= 0: pid=64931: Wed Oct 16 09:23:21 2024 00:08:56.978 read: IOPS=10.5k, BW=41.0MiB/s (42.9MB/s)(246MiB/6003msec) 00:08:56.978 slat (usec): min=4, max=8386, avg=56.57, stdev=223.75 00:08:56.978 clat (usec): min=1076, max=16335, avg=8351.83, stdev=1471.06 00:08:56.978 lat (usec): min=1106, max=16367, avg=8408.40, stdev=1475.40 00:08:56.978 clat percentiles (usec): 00:08:56.978 | 1.00th=[ 4080], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7635], 00:08:56.978 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:08:56.978 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11731], 00:08:56.978 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14091], 99.95th=[14222], 00:08:56.978 | 99.99th=[15008] 00:08:56.978 bw ( KiB/s): min= 4896, max=28360, per=51.11%, avg=21433.45, stdev=6969.22, samples=11 00:08:56.978 iops : min= 1224, max= 7090, avg=5358.36, stdev=1742.30, samples=11 00:08:56.978 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(127MiB/5312msec); 0 zone resets 00:08:56.978 slat (usec): min=15, max=3163, avg=64.94, stdev=160.55 00:08:56.978 clat (usec): min=808, max=14630, avg=7250.90, stdev=1288.18 00:08:56.978 lat (usec): min=1069, max=14652, avg=7315.85, stdev=1292.60 00:08:56.978 clat percentiles (usec): 00:08:56.978 | 1.00th=[ 3163], 5.00th=[ 4228], 10.00th=[ 5669], 20.00th=[ 6783], 00:08:56.978 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:08:56.978 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8586], 00:08:56.978 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12649], 99.95th=[13173], 00:08:56.978 | 99.99th=[13829] 00:08:56.978 bw ( KiB/s): min= 5216, max=28032, per=87.86%, avg=21506.18, stdev=6760.24, samples=11 00:08:56.978 iops : min= 1304, max= 7008, avg=5376.55, stdev=1690.06, samples=11 00:08:56.978 lat (usec) : 1000=0.01% 00:08:56.978 lat (msec) : 2=0.04%, 4=1.87%, 10=92.01%, 20=6.09% 00:08:56.978 cpu : usr=5.91%, sys=20.89%, ctx=5586, majf=0, minf=108 00:08:56.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:56.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.979 issued rwts: total=62938,32507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.979 00:08:56.979 Run status group 0 (all jobs): 00:08:56.979 READ: bw=41.0MiB/s (42.9MB/s), 41.0MiB/s-41.0MiB/s (42.9MB/s-42.9MB/s), io=246MiB (258MB), run=6003-6003msec 00:08:56.979 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=127MiB (133MB), run=5312-5312msec 00:08:56.979 00:08:56.979 Disk stats (read/write): 00:08:56.979 nvme0n1: ios=62078/31912, merge=0/0, ticks=495550/216705, in_queue=712255, util=98.60% 00:08:56.979 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:57.237 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65018 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:57.496 09:23:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:57.496 [global] 00:08:57.496 thread=1 00:08:57.496 invalidate=1 00:08:57.496 rw=randrw 00:08:57.496 time_based=1 00:08:57.496 runtime=6 00:08:57.496 ioengine=libaio 00:08:57.496 direct=1 00:08:57.496 bs=4096 00:08:57.496 iodepth=128 00:08:57.496 norandommap=0 00:08:57.496 numjobs=1 00:08:57.496 00:08:57.496 verify_dump=1 00:08:57.496 verify_backlog=512 00:08:57.496 verify_state_save=0 00:08:57.496 do_verify=1 00:08:57.496 verify=crc32c-intel 00:08:57.496 [job0] 00:08:57.496 filename=/dev/nvme0n1 00:08:57.496 Could not set queue depth (nvme0n1) 00:08:57.755 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:57.755 fio-3.35 00:08:57.755 Starting 1 thread 00:08:58.691 09:23:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:58.691 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:58.949 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:59.207 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:59.775 09:23:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65018 00:09:04.033 00:09:04.033 job0: (groupid=0, jobs=1): err= 0: pid=65039: Wed Oct 16 09:23:28 2024 00:09:04.033 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(271MiB/6002msec) 00:09:04.033 slat (usec): min=4, max=7885, avg=41.92, stdev=186.80 00:09:04.033 clat (usec): min=635, max=16050, avg=7536.09, stdev=1832.57 00:09:04.033 lat (usec): min=645, max=16084, avg=7578.01, stdev=1846.91 00:09:04.033 clat percentiles (usec): 00:09:04.033 | 1.00th=[ 3425], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5997], 00:09:04.033 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:09:04.033 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[10945], 00:09:04.033 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13304], 99.95th=[13698], 00:09:04.033 | 99.99th=[14091] 00:09:04.033 bw ( KiB/s): min=11952, max=37992, per=54.14%, avg=24992.00, stdev=8528.60, samples=11 00:09:04.033 iops : min= 2988, max= 9498, avg=6248.00, stdev=2132.15, samples=11 00:09:04.033 write: IOPS=6863, BW=26.8MiB/s (28.1MB/s)(146MiB/5453msec); 0 zone resets 00:09:04.033 slat (usec): min=14, max=2549, avg=54.32, stdev=136.81 00:09:04.033 clat (usec): min=1394, max=13675, avg=6415.77, stdev=1689.09 00:09:04.033 lat (usec): min=1438, max=13699, avg=6470.09, stdev=1703.12 00:09:04.033 clat percentiles (usec): 00:09:04.033 | 1.00th=[ 2769], 5.00th=[ 3392], 10.00th=[ 3851], 20.00th=[ 4555], 00:09:04.033 | 30.00th=[ 5538], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7242], 00:09:04.033 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8356], 00:09:04.033 | 99.00th=[10552], 99.50th=[11207], 99.90th=[12387], 99.95th=[12649], 00:09:04.033 | 99.99th=[13566] 00:09:04.033 bw ( KiB/s): min=12368, max=38368, per=90.95%, avg=24970.91, stdev=8343.68, samples=11 00:09:04.033 iops : min= 3092, max= 9592, avg=6242.73, stdev=2085.92, samples=11 00:09:04.033 lat (usec) : 750=0.01%, 1000=0.02% 00:09:04.033 lat (msec) : 2=0.09%, 4=6.04%, 10=89.39%, 20=4.46% 00:09:04.033 cpu : usr=5.80%, sys=23.31%, ctx=6043, majf=0, minf=102 00:09:04.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:04.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.033 issued rwts: total=69263,37426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.033 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:04.033 00:09:04.033 Run status group 0 (all jobs): 00:09:04.033 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=271MiB (284MB), run=6002-6002msec 00:09:04.033 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=146MiB (153MB), run=5453-5453msec 00:09:04.033 00:09:04.033 Disk stats (read/write): 00:09:04.033 nvme0n1: ios=68241/36895, merge=0/0, ticks=491380/221318, in_queue=712698, util=98.60% 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:04.033 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.293 rmmod nvme_tcp 00:09:04.293 rmmod nvme_fabrics 00:09:04.293 rmmod nvme_keyring 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n 64828 ']' 
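Each check_ana_state call traced above reduces to polling one controller path's ANA state file under /sys/block until it reports the expected value. A minimal sketch of that helper, with the variable names and the 20-attempt budget taken from the xtrace and the retry loop itself assumed rather than copied from multipath.sh:

  # sketch only -- the loop/sleep details are assumptions, not the literal multipath.sh code
  check_ana_state() {
      local path=$1 ana_state=$2
      local timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- == 0 )) && return 1    # give up after ~20 polls
          sleep 1
      done
  }

Called as in the log, e.g. check_ana_state nvme0c0n1 inaccessible, it returns once the kernel's multipath layer has picked up the ANA change pushed by nvmf_subsystem_listener_set_ana_state.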
00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 64828 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 64828 ']' 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 64828 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64828 00:09:04.293 killing process with pid 64828 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64828' 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 64828 00:09:04.293 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 64828 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:04.552 09:23:28 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:04.552 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.811 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.811 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:04.811 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.811 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.811 09:23:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:04.811 00:09:04.811 real 0m19.293s 00:09:04.811 user 1m11.187s 00:09:04.811 sys 0m10.092s 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:04.811 ************************************ 00:09:04.811 END TEST nvmf_target_multipath 00:09:04.811 ************************************ 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.811 ************************************ 00:09:04.811 START TEST nvmf_zcopy 00:09:04.811 ************************************ 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.811 * Looking for test storage... 
00:09:04.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:04.811 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.071 --rc genhtml_branch_coverage=1 00:09:05.071 --rc genhtml_function_coverage=1 00:09:05.071 --rc genhtml_legend=1 00:09:05.071 --rc geninfo_all_blocks=1 00:09:05.071 --rc geninfo_unexecuted_blocks=1 00:09:05.071 00:09:05.071 ' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.071 --rc genhtml_branch_coverage=1 00:09:05.071 --rc genhtml_function_coverage=1 00:09:05.071 --rc genhtml_legend=1 00:09:05.071 --rc geninfo_all_blocks=1 00:09:05.071 --rc geninfo_unexecuted_blocks=1 00:09:05.071 00:09:05.071 ' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.071 --rc genhtml_branch_coverage=1 00:09:05.071 --rc genhtml_function_coverage=1 00:09:05.071 --rc genhtml_legend=1 00:09:05.071 --rc geninfo_all_blocks=1 00:09:05.071 --rc geninfo_unexecuted_blocks=1 00:09:05.071 00:09:05.071 ' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.071 --rc genhtml_branch_coverage=1 00:09:05.071 --rc genhtml_function_coverage=1 00:09:05.071 --rc genhtml_legend=1 00:09:05.071 --rc geninfo_all_blocks=1 00:09:05.071 --rc geninfo_unexecuted_blocks=1 00:09:05.071 00:09:05.071 ' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
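The lt 1.15 2 walk-through above is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' and ':' and the numeric components are compared field by field, so 1.15 sorts before 2 and the pre-2.0 flag spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is selected. A rough equivalent of the traced comparison, as an illustration rather than the literal common.sh code:

  version_lt() {                         # succeeds when $1 sorts before $2
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
      done
      return 1                           # versions are equal
  }
  version_lt 1.15 2 && echo "1.15 < 2"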
00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.071 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.072 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:05.072 Cannot find device "nvmf_init_br" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:05.072 09:23:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:05.072 Cannot find device "nvmf_init_br2" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:05.072 Cannot find device "nvmf_tgt_br" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.072 Cannot find device "nvmf_tgt_br2" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:05.072 Cannot find device "nvmf_init_br" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:05.072 Cannot find device "nvmf_init_br2" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:05.072 Cannot find device "nvmf_tgt_br" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:05.072 Cannot find device "nvmf_tgt_br2" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:05.072 Cannot find device "nvmf_br" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:05.072 Cannot find device "nvmf_init_if" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:05.072 Cannot find device "nvmf_init_if2" 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.072 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:05.072 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:05.332 09:23:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:05.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:05.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:05.332 00:09:05.332 --- 10.0.0.3 ping statistics --- 00:09:05.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.332 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:05.332 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:05.332 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:05.332 00:09:05.332 --- 10.0.0.4 ping statistics --- 00:09:05.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.332 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:05.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:05.332 00:09:05.332 --- 10.0.0.1 ping statistics --- 00:09:05.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.332 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:05.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:05.332 00:09:05.332 --- 10.0.0.2 ping statistics --- 00:09:05.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.332 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=65336 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 65336 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65336 ']' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.332 09:23:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 [2024-10-16 09:23:29.732715] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
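The nvmf_veth_init sequence above is what the four successful pings just verified: nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay on the host as initiator addresses, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the *_br peer ends are all enslaved to the nvmf_br bridge so the two sides can reach each other before the iptables ACCEPT rules for port 4420 are added. Condensed to a single veth pair (commands and addresses as traced; the second pair is configured the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end is pushed into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                       # bridge the host-side peer ends together
  ip link set nvmf_tgt_br master nvmf_br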
00:09:05.332 [2024-10-16 09:23:29.732786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.592 [2024-10-16 09:23:29.865323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.592 [2024-10-16 09:23:29.909606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.592 [2024-10-16 09:23:29.909677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.592 [2024-10-16 09:23:29.909687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.592 [2024-10-16 09:23:29.909694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.592 [2024-10-16 09:23:29.909701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.592 [2024-10-16 09:23:29.910064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.592 [2024-10-16 09:23:29.962839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.873 [2024-10-16 09:23:30.076194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.873 [2024-10-16 09:23:30.092299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.873 malloc0 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:05.873 { 00:09:05.873 "params": { 00:09:05.873 "name": "Nvme$subsystem", 00:09:05.873 "trtype": "$TEST_TRANSPORT", 00:09:05.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.873 "adrfam": "ipv4", 00:09:05.873 "trsvcid": "$NVMF_PORT", 00:09:05.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.873 "hdgst": ${hdgst:-false}, 00:09:05.873 "ddgst": ${ddgst:-false} 00:09:05.873 }, 00:09:05.873 "method": "bdev_nvme_attach_controller" 00:09:05.873 } 00:09:05.873 EOF 00:09:05.873 )") 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
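Stripped of the xtrace noise, the target side of this zcopy run is a short sequence of RPCs; rpc_cmd is effectively a thin wrapper around scripts/rpc.py, so the equivalent direct calls, with the arguments traced above, would look roughly like the following (the discovery listener is added the same way), after which gen_nvmf_target_json assembles the bdevperf attach config printed just below:

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                       # TCP transport with zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                               # 32 MB RAM bdev, 4096-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1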
00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:05.873 09:23:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:05.873 "params": { 00:09:05.873 "name": "Nvme1", 00:09:05.873 "trtype": "tcp", 00:09:05.873 "traddr": "10.0.0.3", 00:09:05.873 "adrfam": "ipv4", 00:09:05.873 "trsvcid": "4420", 00:09:05.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:05.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:05.873 "hdgst": false, 00:09:05.873 "ddgst": false 00:09:05.873 }, 00:09:05.873 "method": "bdev_nvme_attach_controller" 00:09:05.873 }' 00:09:05.873 [2024-10-16 09:23:30.190701] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:09:05.873 [2024-10-16 09:23:30.190789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65362 ] 00:09:06.144 [2024-10-16 09:23:30.332422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.144 [2024-10-16 09:23:30.393406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.144 [2024-10-16 09:23:30.461230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.402 Running I/O for 10 seconds... 00:09:08.275 6257.00 IOPS, 48.88 MiB/s [2024-10-16T09:23:33.616Z] 6268.50 IOPS, 48.97 MiB/s [2024-10-16T09:23:34.993Z] 6279.33 IOPS, 49.06 MiB/s [2024-10-16T09:23:35.956Z] 6301.75 IOPS, 49.23 MiB/s [2024-10-16T09:23:36.893Z] 6311.20 IOPS, 49.31 MiB/s [2024-10-16T09:23:37.828Z] 6320.17 IOPS, 49.38 MiB/s [2024-10-16T09:23:38.764Z] 6333.00 IOPS, 49.48 MiB/s [2024-10-16T09:23:39.698Z] 6400.12 IOPS, 50.00 MiB/s [2024-10-16T09:23:40.683Z] 6442.33 IOPS, 50.33 MiB/s [2024-10-16T09:23:40.683Z] 6482.40 IOPS, 50.64 MiB/s 00:09:16.279 Latency(us) 00:09:16.279 [2024-10-16T09:23:40.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.280 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:16.280 Verification LBA range: start 0x0 length 0x1000 00:09:16.280 Nvme1n1 : 10.01 6485.93 50.67 0.00 0.00 19673.89 2755.49 35508.60 00:09:16.280 [2024-10-16T09:23:40.684Z] =================================================================================================================== 00:09:16.280 [2024-10-16T09:23:40.684Z] Total : 6485.93 50.67 0.00 0.00 19673.89 2755.49 35508.60 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65479 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:16.539 { 00:09:16.539 "params": { 00:09:16.539 "name": "Nvme$subsystem", 00:09:16.539 "trtype": "$TEST_TRANSPORT", 00:09:16.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.539 "adrfam": "ipv4", 00:09:16.539 "trsvcid": "$NVMF_PORT", 00:09:16.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.539 "hdgst": ${hdgst:-false}, 00:09:16.539 "ddgst": ${ddgst:-false} 00:09:16.539 }, 00:09:16.539 "method": "bdev_nvme_attach_controller" 00:09:16.539 } 00:09:16.539 EOF 00:09:16.539 )") 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:16.539 [2024-10-16 09:23:40.784444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.784486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:16.539 09:23:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:16.539 "params": { 00:09:16.539 "name": "Nvme1", 00:09:16.539 "trtype": "tcp", 00:09:16.539 "traddr": "10.0.0.3", 00:09:16.539 "adrfam": "ipv4", 00:09:16.539 "trsvcid": "4420", 00:09:16.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.539 "hdgst": false, 00:09:16.539 "ddgst": false 00:09:16.539 }, 00:09:16.539 "method": "bdev_nvme_attach_controller" 00:09:16.539 }' 00:09:16.539 [2024-10-16 09:23:40.796407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.796436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.808407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.808432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.820418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.820442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.832407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.832446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.836945] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:09:16.539 [2024-10-16 09:23:40.837036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65479 ] 00:09:16.539 [2024-10-16 09:23:40.844429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.844453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.856422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.856447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.868421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.868459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.880434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.880475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.892444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.892467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.904441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.904478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.916438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.916477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.928443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.928480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.539 [2024-10-16 09:23:40.940446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.539 [2024-10-16 09:23:40.940483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:40.952454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:40.952478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:40.964458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:40.964482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:40.976470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:40.976494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:40.980459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.798 [2024-10-16 09:23:40.988476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:40.988502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:16.798 [2024-10-16 09:23:40.996478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:40.996504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.004476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.004504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.016483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.016509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.028483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.028509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.040488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.040514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.042296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.798 [2024-10-16 09:23:41.052492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.052519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.064504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.064533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.076508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.076537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.088510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.088546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.100513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.100549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.110265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.798 [2024-10-16 09:23:41.112511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.112536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.124516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.124557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.136511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.136534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.148509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:16.798 [2024-10-16 09:23:41.148532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.160514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.798 [2024-10-16 09:23:41.160537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.798 [2024-10-16 09:23:41.172549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.799 [2024-10-16 09:23:41.172586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.799 [2024-10-16 09:23:41.184568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.799 [2024-10-16 09:23:41.184593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.799 [2024-10-16 09:23:41.196581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.799 [2024-10-16 09:23:41.196607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.208588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.208616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.220592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.220620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.232601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.232630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 Running I/O for 5 seconds... 
00:09:17.058 [2024-10-16 09:23:41.249374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.249418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.258948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.258978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.273807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.273850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.284247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.284290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.299272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.299318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.316102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.316176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.332659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.332718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.349019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.349065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.365605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.365643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.381670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.381712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.399294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.399338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.414184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.414227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.430504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.430547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.446524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 [2024-10-16 09:23:41.446576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.058 [2024-10-16 09:23:41.456078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.058 
[2024-10-16 09:23:41.456139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.472527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.472567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.489265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.489308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.505397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.505439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.524383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.524414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.539612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.539665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.556679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.556703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.573513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.573566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.590438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.590469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.605975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.606008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.623444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.623488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.639150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.639210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.648831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.648861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.664695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.664739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.682472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.682515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.698614] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.698667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.317 [2024-10-16 09:23:41.707955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.317 [2024-10-16 09:23:41.708000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.723458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.723503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.739830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.739876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.754154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.754213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.770125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.770199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.787025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.787071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.802886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.802945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.819215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.819259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.837293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.837337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.852274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.852317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.871322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.871366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.886986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.887023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.904506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.904559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.919421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.919452] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.934898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.934929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.953100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.953956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.576 [2024-10-16 09:23:41.969085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.576 [2024-10-16 09:23:41.969295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:41.985842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:41.985874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.004623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.004659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.019879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.019913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.037355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.037387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.053397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.053431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.071963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.072000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.087033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.087266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.103884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.103920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.119607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.119654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.136902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.136938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.155020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.155051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.170645] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.170676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.188288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.188331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.204589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.204617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.222687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.222718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.836 [2024-10-16 09:23:42.238608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.836 [2024-10-16 09:23:42.238671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 11645.00 IOPS, 90.98 MiB/s [2024-10-16T09:23:42.499Z] [2024-10-16 09:23:42.254374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.254597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.264091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.264286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.280415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.280567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.297425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.297631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.314448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.314633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.330191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.330351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.340493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.340661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.355571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.355790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.371760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.371917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.392152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:18.095 [2024-10-16 09:23:42.392358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.407487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.407692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.423940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.423977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.095 [2024-10-16 09:23:42.441317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.095 [2024-10-16 09:23:42.441350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.096 [2024-10-16 09:23:42.457783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.096 [2024-10-16 09:23:42.457818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.096 [2024-10-16 09:23:42.473257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.096 [2024-10-16 09:23:42.473289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.096 [2024-10-16 09:23:42.482840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.096 [2024-10-16 09:23:42.482875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.096 [2024-10-16 09:23:42.498886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.096 [2024-10-16 09:23:42.498922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.514970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.515004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.534112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.534312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.549376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.549564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.559519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.559579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.573826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.573858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.588163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.588193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.604699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.604731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.621564] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.621623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.636761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.636796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.646869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.646906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.662415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.662602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.679206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.679239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.696211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.696243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.712544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.712587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.729189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.729346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.355 [2024-10-16 09:23:42.745235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.355 [2024-10-16 09:23:42.745268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.763827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.763863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.779291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.779448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.795893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.795927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.811903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.811965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.830322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.830353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.845716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.845779] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.865549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.865758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.880886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.881056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.897456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.897488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.914002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.914038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.930494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.930529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.947731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.947783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.963209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.963387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.972941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.972977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:42.989082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:42.989132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.615 [2024-10-16 09:23:43.006518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.615 [2024-10-16 09:23:43.006741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.022083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.022271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.040058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.040092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.053670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.053873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.069536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.069767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.086291] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.086323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.104591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.104626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.119354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.119387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.135794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.135829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.151938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.151975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.170038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.170089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.185516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.185593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.195021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.195240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.211609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.211670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.228401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.228438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 11573.00 IOPS, 90.41 MiB/s [2024-10-16T09:23:43.279Z] [2024-10-16 09:23:43.245751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.245782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.260636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.260686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.875 [2024-10-16 09:23:43.276164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.875 [2024-10-16 09:23:43.276359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.286165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.286196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.298482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:19.135 [2024-10-16 09:23:43.298514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.313430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.313463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.329593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.329651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.346608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.346666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.363332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.363501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.379535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.379606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.389196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.389228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.403971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.404008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.419383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.419416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.428892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.428924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.444118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.444276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.459119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.459275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.469641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.469687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.484257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.484291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.494494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.494528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.506800] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.506834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.523410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.523443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.135 [2024-10-16 09:23:43.539027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.135 [2024-10-16 09:23:43.539063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.549172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.549335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.566109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.566175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.575838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.575874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.590668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.590699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.606809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.606843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.624734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.624791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.640663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.640710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.658700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.658732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.673979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.674188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.693189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.693337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.707840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.707988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.724186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.724220] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.394 [2024-10-16 09:23:43.741138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.394 [2024-10-16 09:23:43.741172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.395 [2024-10-16 09:23:43.756909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.395 [2024-10-16 09:23:43.756945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.395 [2024-10-16 09:23:43.766967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.395 [2024-10-16 09:23:43.767118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.395 [2024-10-16 09:23:43.782406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.395 [2024-10-16 09:23:43.782441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.395 [2024-10-16 09:23:43.798768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.395 [2024-10-16 09:23:43.798805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.808155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.808189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.823696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.823729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.834413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.834446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.849650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.849703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.866237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.866270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.883215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.883248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.898257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.898290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.916704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.916738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.932178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.932360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.949017] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.949054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.966565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.966606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.981613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.981674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:43.999611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:43.999827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:44.014821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:44.014966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:44.030466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:44.030674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.654 [2024-10-16 09:23:44.048758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.654 [2024-10-16 09:23:44.048907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.064319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.064490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.081594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.081792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.098194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.098352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.114524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.114732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.131304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.131454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.147997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.148155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.164834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.165016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.181466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.181655] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.198505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.198697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.215644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.215802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.231692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.231862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 11514.00 IOPS, 89.95 MiB/s [2024-10-16T09:23:44.318Z] [2024-10-16 09:23:44.248178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.248353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.265212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.265359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.281115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.281302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.914 [2024-10-16 09:23:44.297756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.914 [2024-10-16 09:23:44.297915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.322919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.323083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.339946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.340088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.356383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.356530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.374152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.374298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.388836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.389019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.406226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.406390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.422185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.422348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 
09:23:44.437852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.438042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.454311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.454473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.470207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.470361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.486366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.486531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.502009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.502235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.511565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.511777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.528191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.528405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.543858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.544044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.553276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.553452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.174 [2024-10-16 09:23:44.569343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.174 [2024-10-16 09:23:44.569497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.433 [2024-10-16 09:23:44.585670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.433 [2024-10-16 09:23:44.585898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.602575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.602778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.619572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.619757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.634989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.635166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.650534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.650595] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.660250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.660318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.676788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.676824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.692703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.692735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.702103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.702151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.717996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.718029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.734515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.734591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.750869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.751045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.767182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.767215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.784148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.784181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.801350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.801513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.817490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.817522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.434 [2024-10-16 09:23:44.834646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.434 [2024-10-16 09:23:44.834711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.850078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.850112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.865931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.865963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.882869] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.883043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.899789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.899821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.917382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.917414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.933235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.933386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.950328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.950368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.965941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.965973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.719 [2024-10-16 09:23:44.977060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.719 [2024-10-16 09:23:44.977090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-10-16 09:23:44.991822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-10-16 09:23:44.991995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-10-16 09:23:45.008977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-10-16 09:23:45.009009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-10-16 09:23:45.022578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-10-16 09:23:45.022606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-10-16 09:23:45.038696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-10-16 09:23:45.038729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-10-16 09:23:45.054758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-10-16 09:23:45.054791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-10-16 09:23:45.073771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-10-16 09:23:45.073808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.720 [2024-10-16 09:23:45.088197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.720 [2024-10-16 09:23:45.088228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.103782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.103816] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.121283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.121446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.137787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.137941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.153994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.154150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.172027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.172180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.186837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.186982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.203082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.203272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.220094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.220267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.236384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.236531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 11617.50 IOPS, 90.76 MiB/s [2024-10-16T09:23:45.395Z] [2024-10-16 09:23:45.254463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.254667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.269626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.269837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.278658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.278812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.294956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.295109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.313632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.313792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.328532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.328697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 
09:23:45.338240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.338397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.354647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.354808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.373755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.373933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.991 [2024-10-16 09:23:45.387620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.991 [2024-10-16 09:23:45.387778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.403040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.403193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.414117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.414269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.430702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.430734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.446761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.446792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.464468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.464505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.481276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.481434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.498149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.498302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.515119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.515274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.530696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.530844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.548260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.548457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.562872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.563044] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.579225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.579379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.595698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.595856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.612562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.612759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.629194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.629347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.250 [2024-10-16 09:23:45.645812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.250 [2024-10-16 09:23:45.645973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.662184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.662339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.679456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.679642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.695291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.695444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.713283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.713437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.727827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.727997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.744583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.744806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.759234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.759389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.774026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.774182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.790647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.790805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.806149] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.806180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.823984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.824015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.840250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.840281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.857943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.857973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.874106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.874137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.891640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.891669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.510 [2024-10-16 09:23:45.907321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.510 [2024-10-16 09:23:45.907352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:45.921924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:45.921986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:45.938889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:45.938936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:45.954348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:45.954379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:45.971871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:45.971905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:45.987717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:45.987750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.002594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.002625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.017522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.017730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.035429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.035590] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.051131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.051268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.068378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.068536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.085280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.085426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.103122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.103265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.117319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.117456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.132891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.133028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.150925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.151094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.770 [2024-10-16 09:23:46.165689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.770 [2024-10-16 09:23:46.165836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 [2024-10-16 09:23:46.180418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.180613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 [2024-10-16 09:23:46.191059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.191196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 [2024-10-16 09:23:46.206827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.206982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 [2024-10-16 09:23:46.222941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.223078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 11955.20 IOPS, 93.40 MiB/s [2024-10-16T09:23:46.434Z] [2024-10-16 09:23:46.240972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.241111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 00:09:22.030 Latency(us) 00:09:22.030 [2024-10-16T09:23:46.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.030 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:22.030 Nvme1n1 
: 5.01 11958.15 93.42 0.00 0.00 10692.17 4051.32 18230.92 00:09:22.030 [2024-10-16T09:23:46.434Z] =================================================================================================================== 00:09:22.030 [2024-10-16T09:23:46.434Z] Total : 11958.15 93.42 0.00 0.00 10692.17 4051.32 18230.92 00:09:22.030 [2024-10-16 09:23:46.251945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.252118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 [2024-10-16 09:23:46.263940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.264076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 [2024-10-16 09:23:46.275950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.276114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.030 [2024-10-16 09:23:46.287973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.030 [2024-10-16 09:23:46.288172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.299990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.300189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.311993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.312213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.324002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.324230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.336007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.336233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.348000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.348218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.360023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.360057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.372025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.372060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.384027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.384057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.396038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.396083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.408052] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.408081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.420078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.420112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.031 [2024-10-16 09:23:46.432075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.031 [2024-10-16 09:23:46.432101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.290 [2024-10-16 09:23:46.448058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.290 [2024-10-16 09:23:46.448082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.290 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65479) - No such process 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65479 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.290 delay0 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.290 09:23:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:22.290 [2024-10-16 09:23:46.647986] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:28.855 Initializing NVMe Controllers 00:09:28.855 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:28.855 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:28.855 Initialization complete. Launching workers. 
00:09:28.855 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 780 00:09:28.855 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1067, failed to submit 33 00:09:28.855 success 937, unsuccessful 130, failed 0 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.855 rmmod nvme_tcp 00:09:28.855 rmmod nvme_fabrics 00:09:28.855 rmmod nvme_keyring 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 65336 ']' 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 65336 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65336 ']' 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65336 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65336 00:09:28.855 killing process with pid 65336 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65336' 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65336 00:09:28.855 09:23:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65336 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:28.855 09:23:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:28.855 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:29.114 00:09:29.114 real 0m24.272s 00:09:29.114 user 0m39.722s 00:09:29.114 sys 0m6.838s 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.114 ************************************ 00:09:29.114 END TEST nvmf_zcopy 00:09:29.114 ************************************ 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.114 ************************************ 00:09:29.114 START TEST nvmf_nmic 00:09:29.114 ************************************ 00:09:29.114 09:23:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:29.114 * Looking for test storage... 00:09:29.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:29.114 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:29.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.375 --rc genhtml_branch_coverage=1 00:09:29.375 --rc genhtml_function_coverage=1 00:09:29.375 --rc genhtml_legend=1 00:09:29.375 --rc geninfo_all_blocks=1 00:09:29.375 --rc geninfo_unexecuted_blocks=1 00:09:29.375 00:09:29.375 ' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:29.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.375 --rc genhtml_branch_coverage=1 00:09:29.375 --rc genhtml_function_coverage=1 00:09:29.375 --rc genhtml_legend=1 00:09:29.375 --rc geninfo_all_blocks=1 00:09:29.375 --rc geninfo_unexecuted_blocks=1 00:09:29.375 00:09:29.375 ' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:29.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.375 --rc genhtml_branch_coverage=1 00:09:29.375 --rc genhtml_function_coverage=1 00:09:29.375 --rc genhtml_legend=1 00:09:29.375 --rc geninfo_all_blocks=1 00:09:29.375 --rc geninfo_unexecuted_blocks=1 00:09:29.375 00:09:29.375 ' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:29.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.375 --rc genhtml_branch_coverage=1 00:09:29.375 --rc genhtml_function_coverage=1 00:09:29.375 --rc genhtml_legend=1 00:09:29.375 --rc geninfo_all_blocks=1 00:09:29.375 --rc geninfo_unexecuted_blocks=1 00:09:29.375 00:09:29.375 ' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.375 09:23:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.375 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.376 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:29.376 09:23:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:29.376 Cannot 
find device "nvmf_init_br" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:29.376 Cannot find device "nvmf_init_br2" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:29.376 Cannot find device "nvmf_tgt_br" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.376 Cannot find device "nvmf_tgt_br2" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:29.376 Cannot find device "nvmf_init_br" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:29.376 Cannot find device "nvmf_init_br2" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:29.376 Cannot find device "nvmf_tgt_br" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:29.376 Cannot find device "nvmf_tgt_br2" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:29.376 Cannot find device "nvmf_br" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:29.376 Cannot find device "nvmf_init_if" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:29.376 Cannot find device "nvmf_init_if2" 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.376 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.635 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.635 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.635 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:29.635 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:29.635 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:29.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:29.636 00:09:29.636 --- 10.0.0.3 ping statistics --- 00:09:29.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.636 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:29.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:29.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:09:29.636 00:09:29.636 --- 10.0.0.4 ping statistics --- 00:09:29.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.636 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:29.636 00:09:29.636 --- 10.0.0.1 ping statistics --- 00:09:29.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.636 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:29.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:09:29.636 00:09:29.636 --- 10.0.0.2 ping statistics --- 00:09:29.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.636 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.636 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=65869 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 65869 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 65869 ']' 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.636 09:23:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.895 [2024-10-16 09:23:54.066621] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:09:29.895 [2024-10-16 09:23:54.066727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.895 [2024-10-16 09:23:54.209285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.895 [2024-10-16 09:23:54.268887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.895 [2024-10-16 09:23:54.268945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.895 [2024-10-16 09:23:54.268959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.895 [2024-10-16 09:23:54.268970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.895 [2024-10-16 09:23:54.268979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.895 [2024-10-16 09:23:54.270241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.895 [2024-10-16 09:23:54.270297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.895 [2024-10-16 09:23:54.270449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.895 [2024-10-16 09:23:54.270453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.154 [2024-10-16 09:23:54.329098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.723 [2024-10-16 09:23:55.055281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.723 Malloc0 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.723 09:23:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.723 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.723 [2024-10-16 09:23:55.127441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:30.983 test case1: single bdev can't be used in multiple subsystems 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.983 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.983 [2024-10-16 09:23:55.151266] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:30.983 [2024-10-16 09:23:55.151303] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:30.983 [2024-10-16 09:23:55.151329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.983 request: 00:09:30.983 { 00:09:30.984 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:30.984 "namespace": { 00:09:30.984 "bdev_name": "Malloc0", 00:09:30.984 "no_auto_visible": false 00:09:30.984 }, 00:09:30.984 "method": "nvmf_subsystem_add_ns", 00:09:30.984 "req_id": 1 00:09:30.984 } 00:09:30.984 Got JSON-RPC error response 00:09:30.984 response: 00:09:30.984 { 00:09:30.984 "code": -32602, 00:09:30.984 "message": "Invalid parameters" 00:09:30.984 } 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:30.984 Adding namespace failed - expected result. 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:30.984 test case2: host connect to nvmf target in multiple paths 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 [2024-10-16 09:23:55.163361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:30.984 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:31.246 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.246 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:31.246 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.246 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:31.246 09:23:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:33.150 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:33.150 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:33.150 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.150 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:33.150 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.150 09:23:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:33.150 09:23:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:33.150 [global] 00:09:33.150 thread=1 00:09:33.150 invalidate=1 00:09:33.150 rw=write 00:09:33.150 time_based=1 00:09:33.150 runtime=1 00:09:33.150 ioengine=libaio 00:09:33.150 direct=1 00:09:33.150 bs=4096 00:09:33.150 iodepth=1 00:09:33.150 norandommap=0 00:09:33.150 numjobs=1 00:09:33.150 00:09:33.150 verify_dump=1 00:09:33.150 verify_backlog=512 00:09:33.150 verify_state_save=0 00:09:33.150 do_verify=1 00:09:33.150 verify=crc32c-intel 00:09:33.150 [job0] 00:09:33.150 filename=/dev/nvme0n1 00:09:33.150 Could not set queue depth (nvme0n1) 00:09:33.409 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.409 fio-3.35 00:09:33.409 Starting 1 thread 00:09:34.786 00:09:34.786 job0: (groupid=0, jobs=1): err= 0: pid=65962: Wed Oct 16 09:23:58 2024 00:09:34.786 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:34.786 slat (nsec): min=12407, max=63524, avg=16321.62, stdev=5898.96 00:09:34.786 clat (usec): min=158, max=6238, avg=278.46, stdev=273.57 00:09:34.786 lat (usec): min=173, max=6293, avg=294.78, stdev=274.60 00:09:34.786 clat percentiles (usec): 00:09:34.786 | 1.00th=[ 178], 5.00th=[ 202], 10.00th=[ 215], 20.00th=[ 229], 00:09:34.786 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:09:34.786 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 334], 00:09:34.786 | 99.00th=[ 412], 99.50th=[ 873], 99.90th=[ 4490], 99.95th=[ 4621], 00:09:34.786 | 99.99th=[ 6259] 00:09:34.786 write: IOPS=2253, BW=9015KiB/s (9231kB/s)(9024KiB/1001msec); 0 zone resets 00:09:34.786 slat (usec): min=15, max=104, avg=22.82, stdev= 8.05 00:09:34.786 clat (usec): min=92, max=790, avg=149.70, stdev=30.36 00:09:34.786 lat (usec): min=111, max=828, avg=172.52, stdev=32.00 00:09:34.786 clat percentiles (usec): 00:09:34.786 | 1.00th=[ 103], 5.00th=[ 113], 10.00th=[ 119], 20.00th=[ 129], 00:09:34.786 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 153], 00:09:34.786 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 196], 00:09:34.786 | 99.00th=[ 221], 99.50th=[ 247], 99.90th=[ 400], 99.95th=[ 486], 00:09:34.786 | 99.99th=[ 791] 00:09:34.786 bw ( KiB/s): min=10200, max=10200, per=100.00%, avg=10200.00, stdev= 0.00, samples=1 00:09:34.786 iops : min= 2550, max= 2550, avg=2550.00, stdev= 0.00, samples=1 00:09:34.786 lat (usec) : 100=0.28%, 250=71.75%, 500=27.67%, 750=0.02%, 1000=0.05% 00:09:34.786 lat (msec) : 4=0.14%, 10=0.09% 00:09:34.786 cpu : usr=1.90%, sys=6.30%, ctx=4304, majf=0, minf=5 00:09:34.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.786 issued rwts: total=2048,2256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.786 00:09:34.786 Run status group 0 (all jobs): 00:09:34.786 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:34.786 WRITE: bw=9015KiB/s (9231kB/s), 9015KiB/s-9015KiB/s (9231kB/s-9231kB/s), io=9024KiB (9241kB), run=1001-1001msec 00:09:34.786 00:09:34.786 Disk stats (read/write): 
00:09:34.786 nvme0n1: ios=1887/2048, merge=0/0, ticks=539/343, in_queue=882, util=90.68% 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.786 rmmod nvme_tcp 00:09:34.786 rmmod nvme_fabrics 00:09:34.786 rmmod nvme_keyring 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 65869 ']' 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 65869 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 65869 ']' 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 65869 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65869 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65869' 00:09:34.786 killing process with pid 65869 00:09:34.786 09:23:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 65869 00:09:34.786 09:23:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 65869 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:35.046 00:09:35.046 real 0m6.047s 00:09:35.046 user 0m18.892s 00:09:35.046 sys 0m2.015s 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.046 ************************************ 00:09:35.046 END TEST 
nvmf_nmic 00:09:35.046 ************************************ 00:09:35.046 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.306 ************************************ 00:09:35.306 START TEST nvmf_fio_target 00:09:35.306 ************************************ 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.306 * Looking for test storage... 00:09:35.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.306 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.307 --rc genhtml_branch_coverage=1 00:09:35.307 --rc genhtml_function_coverage=1 00:09:35.307 --rc genhtml_legend=1 00:09:35.307 --rc geninfo_all_blocks=1 00:09:35.307 --rc geninfo_unexecuted_blocks=1 00:09:35.307 00:09:35.307 ' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.307 --rc genhtml_branch_coverage=1 00:09:35.307 --rc genhtml_function_coverage=1 00:09:35.307 --rc genhtml_legend=1 00:09:35.307 --rc geninfo_all_blocks=1 00:09:35.307 --rc geninfo_unexecuted_blocks=1 00:09:35.307 00:09:35.307 ' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.307 --rc genhtml_branch_coverage=1 00:09:35.307 --rc genhtml_function_coverage=1 00:09:35.307 --rc genhtml_legend=1 00:09:35.307 --rc geninfo_all_blocks=1 00:09:35.307 --rc geninfo_unexecuted_blocks=1 00:09:35.307 00:09:35.307 ' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.307 --rc genhtml_branch_coverage=1 00:09:35.307 --rc genhtml_function_coverage=1 00:09:35.307 --rc genhtml_legend=1 00:09:35.307 --rc geninfo_all_blocks=1 00:09:35.307 --rc geninfo_unexecuted_blocks=1 00:09:35.307 00:09:35.307 ' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:35.307 
09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.307 09:23:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.307 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.566 Cannot find device "nvmf_init_br" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.566 Cannot find device "nvmf_init_br2" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:35.566 Cannot find device "nvmf_tgt_br" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.566 Cannot find device "nvmf_tgt_br2" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.566 Cannot find device "nvmf_init_br" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.566 Cannot find device "nvmf_init_br2" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.566 Cannot find device "nvmf_tgt_br" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.566 Cannot find device "nvmf_tgt_br2" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.566 Cannot find device "nvmf_br" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.566 Cannot find device "nvmf_init_if" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.566 Cannot find device "nvmf_init_if2" 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:35.566 
09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.566 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.825 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:35.825 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.825 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.825 09:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.825 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.825 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:35.825 00:09:35.825 --- 10.0.0.3 ping statistics --- 00:09:35.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.825 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.825 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.825 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:09:35.825 00:09:35.825 --- 10.0.0.4 ping statistics --- 00:09:35.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.825 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:35.825 00:09:35.825 --- 10.0.0.1 ping statistics --- 00:09:35.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.825 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:35.825 00:09:35.825 --- 10.0.0.2 ping statistics --- 00:09:35.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.825 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=66190 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 66190 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 66190 ']' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.825 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.825 [2024-10-16 09:24:00.136258] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
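At this point nvmfappstart launches the SPDK target inside that namespace and blocks until its RPC socket answers. A minimal sketch of the launch-and-wait step, assuming rpc_get_methods as the readiness probe (the real waitforlisten helper in autotest_common.sh is more elaborate):

    # Launch nvmf_tgt in the target namespace, as logged at nvmf/common.sh@506-508 above.
    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: reactors on cores 0-3; -e 0xFFFF: all tracepoint groups
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app accepts rpc.py calls.
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"

Once the socket answers, the target/fio.sh steps that follow drive everything over rpc.py: the TCP transport, the malloc and RAID bdevs, the nqn.2016-06.io.spdk:cnode1 subsystem, and its listener on 10.0.0.3:4420.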
00:09:35.825 [2024-10-16 09:24:00.136345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.084 [2024-10-16 09:24:00.266454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.084 [2024-10-16 09:24:00.309746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.084 [2024-10-16 09:24:00.309829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.084 [2024-10-16 09:24:00.309856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.084 [2024-10-16 09:24:00.309864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.084 [2024-10-16 09:24:00.309871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.084 [2024-10-16 09:24:00.311163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.084 [2024-10-16 09:24:00.311308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.084 [2024-10-16 09:24:00.311435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.084 [2024-10-16 09:24:00.311435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.084 [2024-10-16 09:24:00.364061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.084 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.084 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:36.084 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:36.084 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:36.084 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.084 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.084 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:36.652 [2024-10-16 09:24:00.755037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.652 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.652 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:36.652 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.219 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:37.219 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.219 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:37.219 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.478 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:37.478 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:38.046 09:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.046 09:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:38.046 09:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.305 09:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:38.305 09:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.564 09:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:38.564 09:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:38.822 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.081 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.081 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.340 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.340 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.599 09:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:39.858 [2024-10-16 09:24:04.107569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:39.858 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:40.117 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:40.376 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:40.635 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:40.635 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:40.635 09:24:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:40.635 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:40.635 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:40.635 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:42.575 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:42.575 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:42.575 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.575 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:42.576 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.576 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:42.576 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:42.576 [global] 00:09:42.576 thread=1 00:09:42.576 invalidate=1 00:09:42.576 rw=write 00:09:42.576 time_based=1 00:09:42.576 runtime=1 00:09:42.576 ioengine=libaio 00:09:42.576 direct=1 00:09:42.576 bs=4096 00:09:42.576 iodepth=1 00:09:42.576 norandommap=0 00:09:42.576 numjobs=1 00:09:42.576 00:09:42.576 verify_dump=1 00:09:42.576 verify_backlog=512 00:09:42.576 verify_state_save=0 00:09:42.576 do_verify=1 00:09:42.576 verify=crc32c-intel 00:09:42.576 [job0] 00:09:42.576 filename=/dev/nvme0n1 00:09:42.576 [job1] 00:09:42.576 filename=/dev/nvme0n2 00:09:42.576 [job2] 00:09:42.576 filename=/dev/nvme0n3 00:09:42.576 [job3] 00:09:42.576 filename=/dev/nvme0n4 00:09:42.576 Could not set queue depth (nvme0n1) 00:09:42.576 Could not set queue depth (nvme0n2) 00:09:42.576 Could not set queue depth (nvme0n3) 00:09:42.576 Could not set queue depth (nvme0n4) 00:09:42.834 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.834 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.834 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.834 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:42.834 fio-3.35 00:09:42.834 Starting 4 threads 00:09:44.211 00:09:44.211 job0: (groupid=0, jobs=1): err= 0: pid=66372: Wed Oct 16 09:24:08 2024 00:09:44.211 read: IOPS=1764, BW=7057KiB/s (7226kB/s)(7064KiB/1001msec) 00:09:44.211 slat (nsec): min=11670, max=63527, avg=17039.47, stdev=5149.49 00:09:44.211 clat (usec): min=136, max=2548, avg=282.88, stdev=100.16 00:09:44.211 lat (usec): min=148, max=2589, avg=299.92, stdev=102.03 00:09:44.211 clat percentiles (usec): 00:09:44.211 | 1.00th=[ 153], 5.00th=[ 229], 10.00th=[ 241], 20.00th=[ 251], 00:09:44.211 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:09:44.211 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 383], 00:09:44.211 | 99.00th=[ 519], 99.50th=[ 553], 99.90th=[ 2540], 99.95th=[ 2540], 00:09:44.211 | 99.99th=[ 
2540] 00:09:44.211 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:44.211 slat (nsec): min=15537, max=92887, avg=24593.56, stdev=7523.35 00:09:44.211 clat (usec): min=94, max=518, avg=201.33, stdev=62.40 00:09:44.211 lat (usec): min=111, max=542, avg=225.93, stdev=65.83 00:09:44.211 clat percentiles (usec): 00:09:44.211 | 1.00th=[ 108], 5.00th=[ 117], 10.00th=[ 124], 20.00th=[ 159], 00:09:44.211 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:09:44.211 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 293], 95.00th=[ 338], 00:09:44.211 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 424], 99.95th=[ 449], 00:09:44.211 | 99.99th=[ 519] 00:09:44.211 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.211 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.211 lat (usec) : 100=0.08%, 250=54.14%, 500=45.07%, 750=0.58%, 1000=0.03% 00:09:44.211 lat (msec) : 2=0.05%, 4=0.05% 00:09:44.211 cpu : usr=1.90%, sys=6.10%, ctx=3815, majf=0, minf=13 00:09:44.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.211 issued rwts: total=1766,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.211 job1: (groupid=0, jobs=1): err= 0: pid=66373: Wed Oct 16 09:24:08 2024 00:09:44.211 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:09:44.211 slat (nsec): min=11767, max=59886, avg=14878.64, stdev=3554.49 00:09:44.211 clat (usec): min=132, max=1563, avg=165.24, stdev=29.88 00:09:44.211 lat (usec): min=145, max=1576, avg=180.12, stdev=30.42 00:09:44.211 clat percentiles (usec): 00:09:44.211 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:44.211 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:09:44.211 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:09:44.211 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 221], 99.95th=[ 453], 00:09:44.211 | 99.99th=[ 1565] 00:09:44.211 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:44.211 slat (usec): min=13, max=118, avg=21.11, stdev= 5.27 00:09:44.211 clat (usec): min=90, max=579, avg=123.32, stdev=18.21 00:09:44.211 lat (usec): min=106, max=614, avg=144.43, stdev=19.59 00:09:44.211 clat percentiles (usec): 00:09:44.211 | 1.00th=[ 97], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:09:44.211 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 125], 00:09:44.211 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 151], 00:09:44.211 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 233], 99.95th=[ 494], 00:09:44.211 | 99.99th=[ 578] 00:09:44.211 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:44.211 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:44.211 lat (usec) : 100=1.26%, 250=98.64%, 500=0.07%, 750=0.02% 00:09:44.211 lat (msec) : 2=0.02% 00:09:44.211 cpu : usr=2.40%, sys=8.40%, ctx=6107, majf=0, minf=11 00:09:44.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.211 issued rwts: total=3033,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:09:44.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.211 job2: (groupid=0, jobs=1): err= 0: pid=66374: Wed Oct 16 09:24:08 2024 00:09:44.211 read: IOPS=2909, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:09:44.211 slat (nsec): min=12048, max=43468, avg=14328.50, stdev=2956.44 00:09:44.211 clat (usec): min=138, max=428, avg=170.04, stdev=16.17 00:09:44.211 lat (usec): min=151, max=441, avg=184.36, stdev=17.01 00:09:44.211 clat percentiles (usec): 00:09:44.211 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:09:44.211 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:09:44.211 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:09:44.211 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 235], 99.95th=[ 235], 00:09:44.211 | 99.99th=[ 429] 00:09:44.211 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:44.211 slat (nsec): min=13202, max=89399, avg=20352.92, stdev=5130.80 00:09:44.211 clat (usec): min=97, max=438, avg=127.46, stdev=15.64 00:09:44.211 lat (usec): min=114, max=457, avg=147.82, stdev=17.47 00:09:44.211 clat percentiles (usec): 00:09:44.211 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 116], 00:09:44.211 | 30.00th=[ 119], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:09:44.211 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 155], 00:09:44.211 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 198], 00:09:44.211 | 99.99th=[ 441] 00:09:44.211 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:44.211 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:44.211 lat (usec) : 100=0.13%, 250=99.83%, 500=0.03% 00:09:44.211 cpu : usr=2.20%, sys=8.10%, ctx=5984, majf=0, minf=5 00:09:44.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.211 issued rwts: total=2912,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.211 job3: (groupid=0, jobs=1): err= 0: pid=66375: Wed Oct 16 09:24:08 2024 00:09:44.211 read: IOPS=1847, BW=7389KiB/s (7566kB/s)(7396KiB/1001msec) 00:09:44.211 slat (nsec): min=11669, max=49930, avg=15170.70, stdev=4790.34 00:09:44.211 clat (usec): min=150, max=853, avg=287.85, stdev=60.39 00:09:44.211 lat (usec): min=164, max=866, avg=303.02, stdev=62.46 00:09:44.211 clat percentiles (usec): 00:09:44.211 | 1.00th=[ 172], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 253], 00:09:44.211 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:09:44.211 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 437], 00:09:44.211 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 857], 00:09:44.211 | 99.99th=[ 857] 00:09:44.211 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:44.211 slat (nsec): min=15898, max=88870, avg=21199.40, stdev=5022.68 00:09:44.211 clat (usec): min=93, max=6194, avg=190.26, stdev=175.45 00:09:44.211 lat (usec): min=110, max=6211, avg=211.46, stdev=176.18 00:09:44.212 clat percentiles (usec): 00:09:44.212 | 1.00th=[ 106], 5.00th=[ 116], 10.00th=[ 122], 20.00th=[ 139], 00:09:44.212 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 198], 00:09:44.212 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 237], 
00:09:44.212 | 99.00th=[ 265], 99.50th=[ 334], 99.90th=[ 2900], 99.95th=[ 3654], 00:09:44.212 | 99.99th=[ 6194] 00:09:44.212 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.212 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.212 lat (usec) : 100=0.08%, 250=58.81%, 500=39.88%, 750=1.10%, 1000=0.03% 00:09:44.212 lat (msec) : 4=0.08%, 10=0.03% 00:09:44.212 cpu : usr=1.80%, sys=5.40%, ctx=3897, majf=0, minf=9 00:09:44.212 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.212 issued rwts: total=1849,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.212 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.212 00:09:44.212 Run status group 0 (all jobs): 00:09:44.212 READ: bw=37.3MiB/s (39.1MB/s), 7057KiB/s-11.8MiB/s (7226kB/s-12.4MB/s), io=37.3MiB (39.2MB), run=1001-1001msec 00:09:44.212 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:09:44.212 00:09:44.212 Disk stats (read/write): 00:09:44.212 nvme0n1: ios=1586/1621, merge=0/0, ticks=448/355, in_queue=803, util=87.07% 00:09:44.212 nvme0n2: ios=2599/2632, merge=0/0, ticks=468/351, in_queue=819, util=88.13% 00:09:44.212 nvme0n3: ios=2515/2560, merge=0/0, ticks=442/339, in_queue=781, util=89.17% 00:09:44.212 nvme0n4: ios=1536/1886, merge=0/0, ticks=431/358, in_queue=789, util=89.21% 00:09:44.212 09:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:44.212 [global] 00:09:44.212 thread=1 00:09:44.212 invalidate=1 00:09:44.212 rw=randwrite 00:09:44.212 time_based=1 00:09:44.212 runtime=1 00:09:44.212 ioengine=libaio 00:09:44.212 direct=1 00:09:44.212 bs=4096 00:09:44.212 iodepth=1 00:09:44.212 norandommap=0 00:09:44.212 numjobs=1 00:09:44.212 00:09:44.212 verify_dump=1 00:09:44.212 verify_backlog=512 00:09:44.212 verify_state_save=0 00:09:44.212 do_verify=1 00:09:44.212 verify=crc32c-intel 00:09:44.212 [job0] 00:09:44.212 filename=/dev/nvme0n1 00:09:44.212 [job1] 00:09:44.212 filename=/dev/nvme0n2 00:09:44.212 [job2] 00:09:44.212 filename=/dev/nvme0n3 00:09:44.212 [job3] 00:09:44.212 filename=/dev/nvme0n4 00:09:44.212 Could not set queue depth (nvme0n1) 00:09:44.212 Could not set queue depth (nvme0n2) 00:09:44.212 Could not set queue depth (nvme0n3) 00:09:44.212 Could not set queue depth (nvme0n4) 00:09:44.212 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.212 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.212 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.212 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.212 fio-3.35 00:09:44.212 Starting 4 threads 00:09:45.588 00:09:45.588 job0: (groupid=0, jobs=1): err= 0: pid=66428: Wed Oct 16 09:24:09 2024 00:09:45.588 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:45.588 slat (nsec): min=11096, max=38715, avg=12837.35, stdev=1932.77 00:09:45.588 clat (usec): min=131, max=216, avg=157.60, stdev=11.92 00:09:45.588 lat (usec): min=143, max=229, avg=170.44, 
stdev=12.18 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:09:45.588 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:09:45.588 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:09:45.588 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 202], 99.95th=[ 208], 00:09:45.588 | 99.99th=[ 217] 00:09:45.588 write: IOPS=3374, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec); 0 zone resets 00:09:45.588 slat (nsec): min=13542, max=56736, avg=18923.04, stdev=3339.08 00:09:45.588 clat (usec): min=86, max=1525, avg=118.99, stdev=28.07 00:09:45.588 lat (usec): min=107, max=1544, avg=137.91, stdev=28.29 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 94], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 109], 00:09:45.588 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 122], 00:09:45.588 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 141], 00:09:45.588 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 172], 99.95th=[ 453], 00:09:45.588 | 99.99th=[ 1532] 00:09:45.588 bw ( KiB/s): min=13472, max=13472, per=32.00%, avg=13472.00, stdev= 0.00, samples=1 00:09:45.588 iops : min= 3368, max= 3368, avg=3368.00, stdev= 0.00, samples=1 00:09:45.588 lat (usec) : 100=2.36%, 250=97.60%, 500=0.03% 00:09:45.588 lat (msec) : 2=0.02% 00:09:45.588 cpu : usr=2.80%, sys=7.80%, ctx=6450, majf=0, minf=9 00:09:45.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 issued rwts: total=3072,3378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.588 job1: (groupid=0, jobs=1): err= 0: pid=66429: Wed Oct 16 09:24:09 2024 00:09:45.588 read: IOPS=1936, BW=7744KiB/s (7930kB/s)(7752KiB/1001msec) 00:09:45.588 slat (nsec): min=11558, max=98891, avg=13730.69, stdev=4157.79 00:09:45.588 clat (usec): min=183, max=473, avg=270.68, stdev=26.11 00:09:45.588 lat (usec): min=196, max=540, avg=284.42, stdev=27.13 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:09:45.588 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:09:45.588 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:09:45.588 | 99.00th=[ 363], 99.50th=[ 408], 99.90th=[ 457], 99.95th=[ 474], 00:09:45.588 | 99.99th=[ 474] 00:09:45.588 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:45.588 slat (usec): min=15, max=104, avg=19.80, stdev= 5.78 00:09:45.588 clat (usec): min=89, max=2054, avg=196.12, stdev=59.00 00:09:45.588 lat (usec): min=111, max=2085, avg=215.92, stdev=60.60 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 104], 5.00th=[ 124], 10.00th=[ 172], 20.00th=[ 180], 00:09:45.588 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:45.588 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 237], 00:09:45.588 | 99.00th=[ 343], 99.50th=[ 379], 99.90th=[ 635], 99.95th=[ 1123], 00:09:45.588 | 99.99th=[ 2057] 00:09:45.588 bw ( KiB/s): min= 8192, max= 8192, per=19.46%, avg=8192.00, stdev= 0.00, samples=1 00:09:45.588 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:45.588 lat (usec) : 100=0.33%, 250=56.87%, 500=42.72%, 750=0.03% 00:09:45.588 lat (msec) : 2=0.03%, 4=0.03% 00:09:45.588 cpu : 
usr=1.10%, sys=5.50%, ctx=3996, majf=0, minf=7 00:09:45.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 issued rwts: total=1938,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.588 job2: (groupid=0, jobs=1): err= 0: pid=66430: Wed Oct 16 09:24:09 2024 00:09:45.588 read: IOPS=1917, BW=7668KiB/s (7852kB/s)(7676KiB/1001msec) 00:09:45.588 slat (nsec): min=11304, max=38210, avg=13200.78, stdev=2255.76 00:09:45.588 clat (usec): min=224, max=526, avg=271.09, stdev=28.58 00:09:45.588 lat (usec): min=237, max=542, avg=284.29, stdev=29.01 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:09:45.588 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:09:45.588 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:09:45.588 | 99.00th=[ 371], 99.50th=[ 469], 99.90th=[ 515], 99.95th=[ 529], 00:09:45.588 | 99.99th=[ 529] 00:09:45.588 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:45.588 slat (nsec): min=15342, max=61538, avg=19435.03, stdev=4529.97 00:09:45.588 clat (usec): min=96, max=6173, avg=199.47, stdev=151.27 00:09:45.588 lat (usec): min=113, max=6193, avg=218.90, stdev=151.69 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 114], 5.00th=[ 133], 10.00th=[ 174], 20.00th=[ 182], 00:09:45.588 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:45.588 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:09:45.588 | 99.00th=[ 347], 99.50th=[ 469], 99.90th=[ 1876], 99.95th=[ 2147], 00:09:45.588 | 99.99th=[ 6194] 00:09:45.588 bw ( KiB/s): min= 8192, max= 8192, per=19.46%, avg=8192.00, stdev= 0.00, samples=1 00:09:45.588 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:45.588 lat (usec) : 100=0.03%, 250=57.90%, 500=41.77%, 750=0.15%, 1000=0.03% 00:09:45.588 lat (msec) : 2=0.08%, 4=0.03%, 10=0.03% 00:09:45.588 cpu : usr=1.30%, sys=5.20%, ctx=3967, majf=0, minf=17 00:09:45.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 issued rwts: total=1919,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.588 job3: (groupid=0, jobs=1): err= 0: pid=66431: Wed Oct 16 09:24:09 2024 00:09:45.588 read: IOPS=2899, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1002msec) 00:09:45.588 slat (nsec): min=11597, max=51207, avg=13225.32, stdev=2294.22 00:09:45.588 clat (usec): min=127, max=884, avg=170.26, stdev=19.06 00:09:45.588 lat (usec): min=153, max=904, avg=183.49, stdev=19.32 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:45.588 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:09:45.588 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:09:45.588 | 99.00th=[ 212], 99.50th=[ 215], 99.90th=[ 225], 99.95th=[ 241], 00:09:45.588 | 99.99th=[ 881] 00:09:45.588 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:09:45.588 slat (nsec): min=13203, 
max=80812, avg=19423.15, stdev=4127.03 00:09:45.588 clat (usec): min=98, max=1660, avg=129.51, stdev=32.92 00:09:45.588 lat (usec): min=116, max=1694, avg=148.93, stdev=33.67 00:09:45.588 clat percentiles (usec): 00:09:45.588 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 118], 00:09:45.588 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:09:45.588 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:09:45.588 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 375], 99.95th=[ 586], 00:09:45.588 | 99.99th=[ 1663] 00:09:45.588 bw ( KiB/s): min=12288, max=12288, per=29.19%, avg=12288.00, stdev= 0.00, samples=1 00:09:45.588 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:45.588 lat (usec) : 100=0.07%, 250=99.80%, 500=0.08%, 750=0.02%, 1000=0.02% 00:09:45.588 lat (msec) : 2=0.02% 00:09:45.588 cpu : usr=1.70%, sys=8.29%, ctx=5981, majf=0, minf=13 00:09:45.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.588 issued rwts: total=2905,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.588 00:09:45.588 Run status group 0 (all jobs): 00:09:45.588 READ: bw=38.3MiB/s (40.2MB/s), 7668KiB/s-12.0MiB/s (7852kB/s-12.6MB/s), io=38.4MiB (40.3MB), run=1001-1002msec 00:09:45.588 WRITE: bw=41.1MiB/s (43.1MB/s), 8184KiB/s-13.2MiB/s (8380kB/s-13.8MB/s), io=41.2MiB (43.2MB), run=1001-1002msec 00:09:45.588 00:09:45.588 Disk stats (read/write): 00:09:45.588 nvme0n1: ios=2610/3058, merge=0/0, ticks=431/389, in_queue=820, util=88.26% 00:09:45.588 nvme0n2: ios=1585/1973, merge=0/0, ticks=452/391, in_queue=843, util=89.19% 00:09:45.588 nvme0n3: ios=1536/1917, merge=0/0, ticks=425/391, in_queue=816, util=89.00% 00:09:45.588 nvme0n4: ios=2560/2612, merge=0/0, ticks=435/353, in_queue=788, util=89.87% 00:09:45.589 09:24:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:45.589 [global] 00:09:45.589 thread=1 00:09:45.589 invalidate=1 00:09:45.589 rw=write 00:09:45.589 time_based=1 00:09:45.589 runtime=1 00:09:45.589 ioengine=libaio 00:09:45.589 direct=1 00:09:45.589 bs=4096 00:09:45.589 iodepth=128 00:09:45.589 norandommap=0 00:09:45.589 numjobs=1 00:09:45.589 00:09:45.589 verify_dump=1 00:09:45.589 verify_backlog=512 00:09:45.589 verify_state_save=0 00:09:45.589 do_verify=1 00:09:45.589 verify=crc32c-intel 00:09:45.589 [job0] 00:09:45.589 filename=/dev/nvme0n1 00:09:45.589 [job1] 00:09:45.589 filename=/dev/nvme0n2 00:09:45.589 [job2] 00:09:45.589 filename=/dev/nvme0n3 00:09:45.589 [job3] 00:09:45.589 filename=/dev/nvme0n4 00:09:45.589 Could not set queue depth (nvme0n1) 00:09:45.589 Could not set queue depth (nvme0n2) 00:09:45.589 Could not set queue depth (nvme0n3) 00:09:45.589 Could not set queue depth (nvme0n4) 00:09:45.589 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.589 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.589 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.589 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:45.589 fio-3.35 00:09:45.589 Starting 4 threads 00:09:46.525 00:09:46.525 job0: (groupid=0, jobs=1): err= 0: pid=66490: Wed Oct 16 09:24:10 2024 00:09:46.525 read: IOPS=4754, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1003msec) 00:09:46.525 slat (usec): min=4, max=2989, avg=99.76, stdev=392.85 00:09:46.525 clat (usec): min=482, max=15964, avg=13103.56, stdev=1242.86 00:09:46.525 lat (usec): min=2606, max=15979, avg=13203.32, stdev=1187.26 00:09:46.525 clat percentiles (usec): 00:09:46.525 | 1.00th=[ 6063], 5.00th=[11338], 10.00th=[12518], 20.00th=[12911], 00:09:46.525 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:09:46.525 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:09:46.525 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15139], 99.95th=[15926], 00:09:46.525 | 99.99th=[15926] 00:09:46.525 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:46.525 slat (usec): min=9, max=2944, avg=94.51, stdev=394.27 00:09:46.525 clat (usec): min=9550, max=16411, avg=12522.69, stdev=595.93 00:09:46.525 lat (usec): min=9841, max=16448, avg=12617.20, stdev=500.75 00:09:46.525 clat percentiles (usec): 00:09:46.525 | 1.00th=[10159], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:09:46.525 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:09:46.525 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13042], 95.00th=[13435], 00:09:46.525 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15926], 99.95th=[16319], 00:09:46.525 | 99.99th=[16450] 00:09:46.525 bw ( KiB/s): min=20480, max=20521, per=26.47%, avg=20500.50, stdev=28.99, samples=2 00:09:46.525 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:46.525 lat (usec) : 500=0.01% 00:09:46.525 lat (msec) : 4=0.32%, 10=0.93%, 20=98.74% 00:09:46.525 cpu : usr=4.69%, sys=14.67%, ctx=461, majf=0, minf=13 00:09:46.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:46.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.525 issued rwts: total=4769,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.525 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.525 job1: (groupid=0, jobs=1): err= 0: pid=66491: Wed Oct 16 09:24:10 2024 00:09:46.525 read: IOPS=4999, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1002msec) 00:09:46.525 slat (usec): min=4, max=3938, avg=96.83, stdev=383.86 00:09:46.525 clat (usec): min=708, max=16905, avg=12789.96, stdev=1316.49 00:09:46.525 lat (usec): min=2392, max=16944, avg=12886.79, stdev=1348.90 00:09:46.525 clat percentiles (usec): 00:09:46.525 | 1.00th=[ 7111], 5.00th=[11338], 10.00th=[11863], 20.00th=[12387], 00:09:46.525 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:09:46.525 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14222], 95.00th=[14746], 00:09:46.525 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16057], 99.95th=[16581], 00:09:46.525 | 99.99th=[16909] 00:09:46.525 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:46.525 slat (usec): min=9, max=3481, avg=92.53, stdev=435.50 00:09:46.526 clat (usec): min=9785, max=16342, avg=12203.59, stdev=801.45 00:09:46.526 lat (usec): min=9805, max=16378, avg=12296.12, stdev=900.84 00:09:46.526 clat percentiles (usec): 00:09:46.526 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:09:46.526 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 
60.00th=[12256], 00:09:46.526 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12780], 95.00th=[14222], 00:09:46.526 | 99.00th=[15139], 99.50th=[15401], 99.90th=[15795], 99.95th=[15795], 00:09:46.526 | 99.99th=[16319] 00:09:46.526 bw ( KiB/s): min=20480, max=20521, per=26.47%, avg=20500.50, stdev=28.99, samples=2 00:09:46.526 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:46.526 lat (usec) : 750=0.01% 00:09:46.526 lat (msec) : 4=0.22%, 10=0.85%, 20=98.92% 00:09:46.526 cpu : usr=4.40%, sys=15.08%, ctx=361, majf=0, minf=17 00:09:46.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:46.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.526 issued rwts: total=5009,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.526 job2: (groupid=0, jobs=1): err= 0: pid=66492: Wed Oct 16 09:24:10 2024 00:09:46.526 read: IOPS=4263, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1005msec) 00:09:46.526 slat (usec): min=5, max=6771, avg=114.94, stdev=587.27 00:09:46.526 clat (usec): min=3854, max=21756, avg=14533.31, stdev=1908.50 00:09:46.526 lat (usec): min=3868, max=21791, avg=14648.25, stdev=1953.67 00:09:46.526 clat percentiles (usec): 00:09:46.526 | 1.00th=[ 4686], 5.00th=[12125], 10.00th=[13042], 20.00th=[13960], 00:09:46.526 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:09:46.526 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[17433], 00:09:46.526 | 99.00th=[20055], 99.50th=[20841], 99.90th=[21627], 99.95th=[21627], 00:09:46.526 | 99.99th=[21627] 00:09:46.526 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:46.526 slat (usec): min=10, max=6163, avg=102.26, stdev=561.50 00:09:46.526 clat (usec): min=7945, max=21687, avg=14049.32, stdev=1570.97 00:09:46.526 lat (usec): min=7969, max=21704, avg=14151.59, stdev=1657.05 00:09:46.526 clat percentiles (usec): 00:09:46.526 | 1.00th=[ 9241], 5.00th=[12125], 10.00th=[12780], 20.00th=[13304], 00:09:46.526 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:09:46.526 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15401], 95.00th=[16909], 00:09:46.526 | 99.00th=[19792], 99.50th=[20841], 99.90th=[21627], 99.95th=[21627], 00:09:46.526 | 99.99th=[21627] 00:09:46.526 bw ( KiB/s): min=17712, max=19152, per=23.80%, avg=18432.00, stdev=1018.23, samples=2 00:09:46.526 iops : min= 4428, max= 4788, avg=4608.00, stdev=254.56, samples=2 00:09:46.526 lat (msec) : 4=0.07%, 10=1.82%, 20=97.14%, 50=0.97% 00:09:46.526 cpu : usr=4.08%, sys=12.55%, ctx=350, majf=0, minf=11 00:09:46.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:46.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.526 issued rwts: total=4285,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.526 job3: (groupid=0, jobs=1): err= 0: pid=66493: Wed Oct 16 09:24:10 2024 00:09:46.526 read: IOPS=4250, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1003msec) 00:09:46.526 slat (usec): min=4, max=4134, avg=109.13, stdev=438.89 00:09:46.526 clat (usec): min=532, max=18450, avg=14480.94, stdev=1522.91 00:09:46.526 lat (usec): min=2383, max=18840, avg=14590.06, stdev=1558.21 00:09:46.526 clat percentiles 
(usec): 00:09:46.526 | 1.00th=[ 5538], 5.00th=[12780], 10.00th=[13829], 20.00th=[14091], 00:09:46.526 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:09:46.526 | 70.00th=[14746], 80.00th=[15008], 90.00th=[16057], 95.00th=[16712], 00:09:46.526 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:09:46.526 | 99.99th=[18482] 00:09:46.526 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:46.526 slat (usec): min=11, max=5110, avg=108.36, stdev=539.32 00:09:46.526 clat (usec): min=11149, max=19450, avg=14073.37, stdev=987.40 00:09:46.526 lat (usec): min=11173, max=19498, avg=14181.73, stdev=1109.58 00:09:46.526 clat percentiles (usec): 00:09:46.526 | 1.00th=[11600], 5.00th=[13173], 10.00th=[13304], 20.00th=[13435], 00:09:46.526 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:09:46.526 | 70.00th=[14222], 80.00th=[14353], 90.00th=[15139], 95.00th=[16581], 00:09:46.526 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18744], 99.95th=[19006], 00:09:46.526 | 99.99th=[19530] 00:09:46.526 bw ( KiB/s): min=18072, max=18792, per=23.80%, avg=18432.00, stdev=509.12, samples=2 00:09:46.526 iops : min= 4518, max= 4698, avg=4608.00, stdev=127.28, samples=2 00:09:46.526 lat (usec) : 750=0.01% 00:09:46.526 lat (msec) : 4=0.20%, 10=0.48%, 20=99.30% 00:09:46.526 cpu : usr=4.49%, sys=12.57%, ctx=317, majf=0, minf=9 00:09:46.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:46.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.526 issued rwts: total=4263,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.526 00:09:46.526 Run status group 0 (all jobs): 00:09:46.526 READ: bw=71.2MiB/s (74.7MB/s), 16.6MiB/s-19.5MiB/s (17.4MB/s-20.5MB/s), io=71.6MiB (75.1MB), run=1002-1005msec 00:09:46.526 WRITE: bw=75.6MiB/s (79.3MB/s), 17.9MiB/s-20.0MiB/s (18.8MB/s-20.9MB/s), io=76.0MiB (79.7MB), run=1002-1005msec 00:09:46.526 00:09:46.526 Disk stats (read/write): 00:09:46.526 nvme0n1: ios=4146/4419, merge=0/0, ticks=12324/11876, in_queue=24200, util=87.88% 00:09:46.526 nvme0n2: ios=4123/4606, merge=0/0, ticks=16635/15518, in_queue=32153, util=88.52% 00:09:46.526 nvme0n3: ios=3584/4045, merge=0/0, ticks=25083/24878, in_queue=49961, util=89.36% 00:09:46.526 nvme0n4: ios=3584/4052, merge=0/0, ticks=16560/16109, in_queue=32669, util=89.82% 00:09:46.526 09:24:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:46.785 [global] 00:09:46.785 thread=1 00:09:46.785 invalidate=1 00:09:46.785 rw=randwrite 00:09:46.785 time_based=1 00:09:46.785 runtime=1 00:09:46.785 ioengine=libaio 00:09:46.785 direct=1 00:09:46.785 bs=4096 00:09:46.785 iodepth=128 00:09:46.785 norandommap=0 00:09:46.785 numjobs=1 00:09:46.785 00:09:46.785 verify_dump=1 00:09:46.785 verify_backlog=512 00:09:46.785 verify_state_save=0 00:09:46.785 do_verify=1 00:09:46.785 verify=crc32c-intel 00:09:46.785 [job0] 00:09:46.785 filename=/dev/nvme0n1 00:09:46.785 [job1] 00:09:46.785 filename=/dev/nvme0n2 00:09:46.785 [job2] 00:09:46.785 filename=/dev/nvme0n3 00:09:46.785 [job3] 00:09:46.785 filename=/dev/nvme0n4 00:09:46.785 Could not set queue depth (nvme0n1) 00:09:46.785 Could not set queue depth (nvme0n2) 00:09:46.785 Could not set queue depth 
(nvme0n3) 00:09:46.785 Could not set queue depth (nvme0n4) 00:09:46.785 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.785 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.785 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.785 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.785 fio-3.35 00:09:46.785 Starting 4 threads 00:09:48.187 00:09:48.187 job0: (groupid=0, jobs=1): err= 0: pid=66549: Wed Oct 16 09:24:12 2024 00:09:48.187 read: IOPS=5016, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec) 00:09:48.187 slat (usec): min=4, max=3497, avg=96.78, stdev=376.43 00:09:48.187 clat (usec): min=651, max=16370, avg=12606.21, stdev=1309.06 00:09:48.187 lat (usec): min=2482, max=16419, avg=12702.98, stdev=1340.93 00:09:48.187 clat percentiles (usec): 00:09:48.187 | 1.00th=[ 6980], 5.00th=[10683], 10.00th=[11469], 20.00th=[12256], 00:09:48.187 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:09:48.187 | 70.00th=[12911], 80.00th=[13042], 90.00th=[14091], 95.00th=[14484], 00:09:48.187 | 99.00th=[15139], 99.50th=[15401], 99.90th=[15926], 99.95th=[16188], 00:09:48.187 | 99.99th=[16319] 00:09:48.188 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:48.188 slat (usec): min=12, max=3339, avg=92.05, stdev=381.30 00:09:48.188 clat (usec): min=8989, max=15999, avg=12360.33, stdev=999.28 00:09:48.188 lat (usec): min=9010, max=16016, avg=12452.38, stdev=1055.36 00:09:48.188 clat percentiles (usec): 00:09:48.188 | 1.00th=[10028], 5.00th=[11207], 10.00th=[11469], 20.00th=[11731], 00:09:48.188 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:09:48.188 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13566], 95.00th=[14353], 00:09:48.188 | 99.00th=[15401], 99.50th=[15664], 99.90th=[15926], 99.95th=[15926], 00:09:48.188 | 99.99th=[16057] 00:09:48.188 bw ( KiB/s): min=20398, max=20398, per=26.37%, avg=20398.00, stdev= 0.00, samples=1 00:09:48.188 iops : min= 5099, max= 5099, avg=5099.00, stdev= 0.00, samples=1 00:09:48.188 lat (usec) : 750=0.01% 00:09:48.188 lat (msec) : 4=0.20%, 10=1.48%, 20=98.31% 00:09:48.188 cpu : usr=5.09%, sys=14.49%, ctx=513, majf=0, minf=15 00:09:48.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:48.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.188 issued rwts: total=5027,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.188 job1: (groupid=0, jobs=1): err= 0: pid=66550: Wed Oct 16 09:24:12 2024 00:09:48.188 read: IOPS=5087, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1005msec) 00:09:48.188 slat (usec): min=4, max=6555, avg=93.51, stdev=577.02 00:09:48.188 clat (usec): min=1546, max=21077, avg=12989.94, stdev=1580.02 00:09:48.188 lat (usec): min=5329, max=25103, avg=13083.45, stdev=1601.87 00:09:48.188 clat percentiles (usec): 00:09:48.188 | 1.00th=[ 7898], 5.00th=[10421], 10.00th=[12125], 20.00th=[12518], 00:09:48.188 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:09:48.188 | 70.00th=[13435], 80.00th=[13698], 90.00th=[13960], 95.00th=[14484], 00:09:48.188 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21103], 
99.95th=[21103], 00:09:48.188 | 99.99th=[21103] 00:09:48.188 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:09:48.188 slat (usec): min=10, max=7365, avg=94.42, stdev=547.93 00:09:48.188 clat (usec): min=6356, max=16287, avg=11890.68, stdev=1106.35 00:09:48.188 lat (usec): min=8045, max=16461, avg=11985.10, stdev=989.97 00:09:48.188 clat percentiles (usec): 00:09:48.188 | 1.00th=[ 7898], 5.00th=[10290], 10.00th=[10683], 20.00th=[11207], 00:09:48.188 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:09:48.188 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13566], 00:09:48.188 | 99.00th=[15008], 99.50th=[15270], 99.90th=[16319], 99.95th=[16319], 00:09:48.188 | 99.99th=[16319] 00:09:48.188 bw ( KiB/s): min=20439, max=20480, per=26.45%, avg=20459.50, stdev=28.99, samples=2 00:09:48.188 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:09:48.188 lat (msec) : 2=0.01%, 10=3.89%, 20=95.78%, 50=0.32% 00:09:48.188 cpu : usr=4.38%, sys=14.54%, ctx=218, majf=0, minf=9 00:09:48.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:48.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.188 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.188 job2: (groupid=0, jobs=1): err= 0: pid=66551: Wed Oct 16 09:24:12 2024 00:09:48.188 read: IOPS=4374, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1002msec) 00:09:48.188 slat (usec): min=7, max=6379, avg=112.95, stdev=521.08 00:09:48.188 clat (usec): min=820, max=20676, avg=14450.59, stdev=1771.81 00:09:48.188 lat (usec): min=2756, max=24267, avg=14563.54, stdev=1782.52 00:09:48.188 clat percentiles (usec): 00:09:48.188 | 1.00th=[ 7898], 5.00th=[11731], 10.00th=[13042], 20.00th=[13698], 00:09:48.188 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:09:48.188 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15664], 95.00th=[17433], 00:09:48.188 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20579], 99.95th=[20579], 00:09:48.188 | 99.99th=[20579] 00:09:48.188 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:48.188 slat (usec): min=10, max=6997, avg=101.49, stdev=585.10 00:09:48.188 clat (usec): min=6038, max=20552, avg=13728.96, stdev=1520.49 00:09:48.188 lat (usec): min=6061, max=21311, avg=13830.44, stdev=1613.94 00:09:48.188 clat percentiles (usec): 00:09:48.188 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[12256], 20.00th=[12911], 00:09:48.188 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:09:48.188 | 70.00th=[13960], 80.00th=[14353], 90.00th=[15270], 95.00th=[16581], 00:09:48.188 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20579], 99.95th=[20579], 00:09:48.188 | 99.99th=[20579] 00:09:48.188 bw ( KiB/s): min=18916, max=18916, per=24.45%, avg=18916.00, stdev= 0.00, samples=1 00:09:48.188 iops : min= 4729, max= 4729, avg=4729.00, stdev= 0.00, samples=1 00:09:48.188 lat (usec) : 1000=0.01% 00:09:48.188 lat (msec) : 4=0.20%, 10=1.75%, 20=97.80%, 50=0.24% 00:09:48.188 cpu : usr=4.30%, sys=12.79%, ctx=317, majf=0, minf=9 00:09:48.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:48.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.188 
issued rwts: total=4383,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.188 job3: (groupid=0, jobs=1): err= 0: pid=66552: Wed Oct 16 09:24:12 2024 00:09:48.188 read: IOPS=4525, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1006msec) 00:09:48.188 slat (usec): min=7, max=6274, avg=104.84, stdev=647.09 00:09:48.188 clat (usec): min=2207, max=22474, avg=14435.05, stdev=1736.20 00:09:48.188 lat (usec): min=6791, max=26982, avg=14539.89, stdev=1726.73 00:09:48.188 clat percentiles (usec): 00:09:48.188 | 1.00th=[ 7570], 5.00th=[10159], 10.00th=[13566], 20.00th=[14091], 00:09:48.188 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14746], 00:09:48.188 | 70.00th=[15008], 80.00th=[15008], 90.00th=[15270], 95.00th=[15533], 00:09:48.188 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:09:48.188 | 99.99th=[22414] 00:09:48.188 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:48.188 slat (usec): min=7, max=10356, avg=105.95, stdev=641.52 00:09:48.188 clat (usec): min=6781, max=20959, avg=13382.79, stdev=1542.54 00:09:48.188 lat (usec): min=9420, max=21001, avg=13488.75, stdev=1443.79 00:09:48.188 clat percentiles (usec): 00:09:48.188 | 1.00th=[ 8717], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:09:48.188 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:09:48.188 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[15139], 00:09:48.188 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:09:48.188 | 99.99th=[20841] 00:09:48.188 bw ( KiB/s): min=17432, max=19393, per=23.80%, avg=18412.50, stdev=1386.64, samples=2 00:09:48.188 iops : min= 4358, max= 4848, avg=4603.00, stdev=346.48, samples=2 00:09:48.188 lat (msec) : 4=0.01%, 10=3.49%, 20=95.37%, 50=1.12% 00:09:48.188 cpu : usr=4.18%, sys=13.33%, ctx=196, majf=0, minf=12 00:09:48.188 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:48.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.188 issued rwts: total=4553,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.188 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.188 00:09:48.188 Run status group 0 (all jobs): 00:09:48.188 READ: bw=74.1MiB/s (77.7MB/s), 17.1MiB/s-19.9MiB/s (17.9MB/s-20.8MB/s), io=74.5MiB (78.1MB), run=1002-1006msec 00:09:48.188 WRITE: bw=75.5MiB/s (79.2MB/s), 17.9MiB/s-20.0MiB/s (18.8MB/s-20.9MB/s), io=76.0MiB (79.7MB), run=1002-1006msec 00:09:48.188 00:09:48.188 Disk stats (read/write): 00:09:48.188 nvme0n1: ios=4188/4608, merge=0/0, ticks=16731/16263, in_queue=32994, util=88.16% 00:09:48.188 nvme0n2: ios=4213/4608, merge=0/0, ticks=51070/50304, in_queue=101374, util=88.77% 00:09:48.188 nvme0n3: ios=3616/4096, merge=0/0, ticks=25701/23849, in_queue=49550, util=89.28% 00:09:48.188 nvme0n4: ios=3710/4096, merge=0/0, ticks=50914/50716, in_queue=101630, util=89.83% 00:09:48.188 09:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:48.188 09:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66569 00:09:48.188 09:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:48.188 09:24:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:48.188 
[global] 00:09:48.188 thread=1 00:09:48.188 invalidate=1 00:09:48.188 rw=read 00:09:48.188 time_based=1 00:09:48.188 runtime=10 00:09:48.188 ioengine=libaio 00:09:48.188 direct=1 00:09:48.188 bs=4096 00:09:48.188 iodepth=1 00:09:48.188 norandommap=1 00:09:48.188 numjobs=1 00:09:48.188 00:09:48.188 [job0] 00:09:48.188 filename=/dev/nvme0n1 00:09:48.188 [job1] 00:09:48.188 filename=/dev/nvme0n2 00:09:48.188 [job2] 00:09:48.188 filename=/dev/nvme0n3 00:09:48.188 [job3] 00:09:48.188 filename=/dev/nvme0n4 00:09:48.188 Could not set queue depth (nvme0n1) 00:09:48.188 Could not set queue depth (nvme0n2) 00:09:48.188 Could not set queue depth (nvme0n3) 00:09:48.188 Could not set queue depth (nvme0n4) 00:09:48.188 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.188 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.188 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.188 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.188 fio-3.35 00:09:48.188 Starting 4 threads 00:09:51.474 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:51.474 fio: pid=66613, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.474 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=58318848, buflen=4096 00:09:51.474 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:51.474 fio: pid=66612, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:51.474 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=61399040, buflen=4096 00:09:51.474 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:51.474 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:52.041 fio: pid=66610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.041 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=5832704, buflen=4096 00:09:52.041 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.041 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:52.041 fio: pid=66611, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.041 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9125888, buflen=4096 00:09:52.300 00:09:52.300 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66610: Wed Oct 16 09:24:16 2024 00:09:52.300 read: IOPS=5095, BW=19.9MiB/s (20.9MB/s)(69.6MiB/3495msec) 00:09:52.300 slat (usec): min=7, max=9853, avg=14.75, stdev=125.48 00:09:52.300 clat (usec): min=116, max=2466, avg=180.32, stdev=51.49 00:09:52.300 lat (usec): min=130, max=10073, avg=195.07, stdev=140.65 00:09:52.300 clat percentiles (usec): 00:09:52.300 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 
145], 20.00th=[ 151], 00:09:52.300 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:09:52.300 | 70.00th=[ 186], 80.00th=[ 219], 90.00th=[ 241], 95.00th=[ 253], 00:09:52.300 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 343], 99.95th=[ 1037], 00:09:52.300 | 99.99th=[ 2114] 00:09:52.300 bw ( KiB/s): min=16128, max=22800, per=28.55%, avg=19920.00, stdev=3015.42, samples=6 00:09:52.300 iops : min= 4032, max= 5700, avg=4980.00, stdev=753.85, samples=6 00:09:52.300 lat (usec) : 250=93.82%, 500=6.09%, 750=0.02% 00:09:52.300 lat (msec) : 2=0.04%, 4=0.02% 00:09:52.300 cpu : usr=1.43%, sys=5.67%, ctx=17820, majf=0, minf=1 00:09:52.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 issued rwts: total=17809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.300 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66611: Wed Oct 16 09:24:16 2024 00:09:52.300 read: IOPS=4946, BW=19.3MiB/s (20.3MB/s)(72.7MiB/3763msec) 00:09:52.300 slat (usec): min=7, max=9481, avg=15.87, stdev=154.29 00:09:52.300 clat (usec): min=118, max=6655, avg=185.03, stdev=135.08 00:09:52.300 lat (usec): min=132, max=9689, avg=200.90, stdev=205.63 00:09:52.300 clat percentiles (usec): 00:09:52.300 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 149], 00:09:52.300 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 00:09:52.300 | 70.00th=[ 192], 80.00th=[ 227], 90.00th=[ 247], 95.00th=[ 260], 00:09:52.300 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 1336], 99.95th=[ 3752], 00:09:52.300 | 99.99th=[ 6390] 00:09:52.300 bw ( KiB/s): min=15328, max=23056, per=28.04%, avg=19569.57, stdev=3089.23, samples=7 00:09:52.300 iops : min= 3832, max= 5764, avg=4892.29, stdev=772.30, samples=7 00:09:52.300 lat (usec) : 250=91.65%, 500=8.18%, 750=0.05%, 1000=0.01% 00:09:52.300 lat (msec) : 2=0.02%, 4=0.05%, 10=0.04% 00:09:52.300 cpu : usr=1.44%, sys=5.64%, ctx=18620, majf=0, minf=2 00:09:52.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 issued rwts: total=18613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.300 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66612: Wed Oct 16 09:24:16 2024 00:09:52.300 read: IOPS=4705, BW=18.4MiB/s (19.3MB/s)(58.6MiB/3186msec) 00:09:52.300 slat (usec): min=7, max=7774, avg=14.29, stdev=87.57 00:09:52.300 clat (usec): min=139, max=1500, avg=196.94, stdev=40.86 00:09:52.300 lat (usec): min=152, max=8019, avg=211.22, stdev=96.65 00:09:52.300 clat percentiles (usec): 00:09:52.300 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:52.300 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 196], 00:09:52.300 | 70.00th=[ 215], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 265], 00:09:52.300 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 420], 99.95th=[ 611], 00:09:52.300 | 99.99th=[ 1418] 00:09:52.300 bw ( KiB/s): min=15328, max=21096, per=27.28%, avg=19036.00, stdev=2709.89, samples=6 00:09:52.300 iops 
: min= 3832, max= 5274, avg=4759.00, stdev=677.47, samples=6 00:09:52.300 lat (usec) : 250=90.17%, 500=9.74%, 750=0.05%, 1000=0.01% 00:09:52.300 lat (msec) : 2=0.02% 00:09:52.300 cpu : usr=1.35%, sys=5.34%, ctx=14998, majf=0, minf=2 00:09:52.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 issued rwts: total=14991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.300 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66613: Wed Oct 16 09:24:16 2024 00:09:52.300 read: IOPS=4874, BW=19.0MiB/s (20.0MB/s)(55.6MiB/2921msec) 00:09:52.300 slat (nsec): min=7437, max=70062, avg=13288.83, stdev=3804.14 00:09:52.300 clat (usec): min=136, max=2166, avg=190.61, stdev=35.76 00:09:52.300 lat (usec): min=147, max=2179, avg=203.89, stdev=35.23 00:09:52.300 clat percentiles (usec): 00:09:52.300 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:52.300 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:09:52.300 | 70.00th=[ 200], 80.00th=[ 219], 90.00th=[ 239], 95.00th=[ 251], 00:09:52.300 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 326], 00:09:52.300 | 99.99th=[ 758] 00:09:52.300 bw ( KiB/s): min=16448, max=21328, per=28.81%, avg=20105.60, stdev=2075.98, samples=5 00:09:52.300 iops : min= 4112, max= 5332, avg=5026.40, stdev=519.00, samples=5 00:09:52.300 lat (usec) : 250=94.34%, 500=5.63%, 750=0.01%, 1000=0.01% 00:09:52.300 lat (msec) : 4=0.01% 00:09:52.300 cpu : usr=1.40%, sys=5.86%, ctx=14241, majf=0, minf=2 00:09:52.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.300 issued rwts: total=14239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.300 00:09:52.300 Run status group 0 (all jobs): 00:09:52.300 READ: bw=68.1MiB/s (71.5MB/s), 18.4MiB/s-19.9MiB/s (19.3MB/s-20.9MB/s), io=256MiB (269MB), run=2921-3763msec 00:09:52.300 00:09:52.300 Disk stats (read/write): 00:09:52.300 nvme0n1: ios=17027/0, merge=0/0, ticks=3055/0, in_queue=3055, util=95.45% 00:09:52.300 nvme0n2: ios=17686/0, merge=0/0, ticks=3312/0, in_queue=3312, util=95.13% 00:09:52.300 nvme0n3: ios=14722/0, merge=0/0, ticks=2837/0, in_queue=2837, util=96.40% 00:09:52.300 nvme0n4: ios=14032/0, merge=0/0, ticks=2662/0, in_queue=2662, util=96.73% 00:09:52.300 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.300 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:52.559 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.559 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:52.817 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.817 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:53.074 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.074 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:53.331 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.331 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66569 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:53.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:53.590 nvmf hotplug test: fio failed as expected 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:53.590 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:53.849 09:24:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.849 rmmod nvme_tcp 00:09:53.849 rmmod nvme_fabrics 00:09:53.849 rmmod nvme_keyring 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 66190 ']' 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 66190 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 66190 ']' 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 66190 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66190 00:09:53.849 killing process with pid 66190 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66190' 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 66190 00:09:53.849 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 66190 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
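The trace here is nvmftestfini unwinding the veth/bridge topology used by the test network. Condensed into a standalone sketch (interface, bridge, and namespace names are copied from this trace; the error handling in nvmf/common.sh is omitted, so failures are simply ignored):

    # sketch only: tear down the nvmf test network seen in this trace
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null      # detach host-side ends from the bridge
        ip link set "$dev" down 2>/dev/null
    done
    ip link delete nvmf_br type bridge 2>/dev/null   # drop the bridge
    ip link delete nvmf_init_if 2>/dev/null          # initiator-side veth pairs
    ip link delete nvmf_init_if2 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null   # target-side pairs live in the namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null     # what remove_spdk_ns does (its commands are hidden by xtrace in the log)

The trace continues with the same teardown steps below.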
00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:54.108 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.366 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.366 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:54.367 00:09:54.367 real 0m19.088s 00:09:54.367 user 1m10.505s 00:09:54.367 sys 0m10.946s 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.367 ************************************ 00:09:54.367 END TEST nvmf_fio_target 00:09:54.367 ************************************ 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.367 ************************************ 00:09:54.367 START TEST nvmf_bdevio 00:09:54.367 ************************************ 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.367 * Looking for test storage... 
00:09:54.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:54.367 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:54.626 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:54.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.627 --rc genhtml_branch_coverage=1 00:09:54.627 --rc genhtml_function_coverage=1 00:09:54.627 --rc genhtml_legend=1 00:09:54.627 --rc geninfo_all_blocks=1 00:09:54.627 --rc geninfo_unexecuted_blocks=1 00:09:54.627 00:09:54.627 ' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:54.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.627 --rc genhtml_branch_coverage=1 00:09:54.627 --rc genhtml_function_coverage=1 00:09:54.627 --rc genhtml_legend=1 00:09:54.627 --rc geninfo_all_blocks=1 00:09:54.627 --rc geninfo_unexecuted_blocks=1 00:09:54.627 00:09:54.627 ' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:54.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.627 --rc genhtml_branch_coverage=1 00:09:54.627 --rc genhtml_function_coverage=1 00:09:54.627 --rc genhtml_legend=1 00:09:54.627 --rc geninfo_all_blocks=1 00:09:54.627 --rc geninfo_unexecuted_blocks=1 00:09:54.627 00:09:54.627 ' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:54.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.627 --rc genhtml_branch_coverage=1 00:09:54.627 --rc genhtml_function_coverage=1 00:09:54.627 --rc genhtml_legend=1 00:09:54.627 --rc geninfo_all_blocks=1 00:09:54.627 --rc geninfo_unexecuted_blocks=1 00:09:54.627 00:09:54.627 ' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.627 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
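nvmftestinit, traced over the next several lines, builds the virtual network that connects the initiator to the target namespace. A minimal sketch of that bring-up, assembled from the ip/iptables/ping commands visible below (addresses 10.0.0.1-10.0.0.4, port 4420, and all interface names are the values this run uses):

    # sketch of nvmf_veth_init as traced below
    ip netns add nvmf_tgt_ns_spdk                                   # target gets its own network namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge ties the host-side ends together
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4                        # sanity-check host -> namespace connectivity

The actual helper also tags each iptables rule with an SPDK_NVMF comment so nvmftestfini can strip them later, as the iptables-save | grep -v SPDK_NVMF step earlier in this log shows.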
00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:54.627 Cannot find device "nvmf_init_br" 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:54.627 Cannot find device "nvmf_init_br2" 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:54.627 Cannot find device "nvmf_tgt_br" 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:54.627 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.627 Cannot find device "nvmf_tgt_br2" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:54.628 Cannot find device "nvmf_init_br" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:54.628 Cannot find device "nvmf_init_br2" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:54.628 Cannot find device "nvmf_tgt_br" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:54.628 Cannot find device "nvmf_tgt_br2" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:54.628 Cannot find device "nvmf_br" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:54.628 Cannot find device "nvmf_init_if" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:54.628 Cannot find device "nvmf_init_if2" 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:54.628 
09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:54.628 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:54.628 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:54.628 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:09:54.887 00:09:54.887 --- 10.0.0.3 ping statistics --- 00:09:54.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.887 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.887 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.887 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:09:54.887 00:09:54.887 --- 10.0.0.4 ping statistics --- 00:09:54.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.887 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:54.887 00:09:54.887 --- 10.0.0.1 ping statistics --- 00:09:54.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.887 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:54.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:09:54.887 00:09:54.887 --- 10.0.0.2 ping statistics --- 00:09:54.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.887 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=66931 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 66931 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 66931 ']' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.887 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.146 [2024-10-16 09:24:19.327103] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:09:55.146 [2024-10-16 09:24:19.327192] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.146 [2024-10-16 09:24:19.466288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.146 [2024-10-16 09:24:19.509485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.146 [2024-10-16 09:24:19.509588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.146 [2024-10-16 09:24:19.509600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.146 [2024-10-16 09:24:19.509608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.146 [2024-10-16 09:24:19.509615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.146 [2024-10-16 09:24:19.511028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:55.146 [2024-10-16 09:24:19.514588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.146 [2024-10-16 09:24:19.514738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:55.146 [2024-10-16 09:24:19.514744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.405 [2024-10-16 09:24:19.568491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 [2024-10-16 09:24:19.683437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 Malloc0 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.405 [2024-10-16 09:24:19.754786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:55.405 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:55.405 { 00:09:55.405 "params": { 00:09:55.405 "name": "Nvme$subsystem", 00:09:55.405 "trtype": "$TEST_TRANSPORT", 00:09:55.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.405 "adrfam": "ipv4", 00:09:55.405 "trsvcid": "$NVMF_PORT", 00:09:55.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.406 "hdgst": ${hdgst:-false}, 00:09:55.406 "ddgst": ${ddgst:-false} 00:09:55.406 }, 00:09:55.406 "method": "bdev_nvme_attach_controller" 00:09:55.406 } 00:09:55.406 EOF 00:09:55.406 )") 00:09:55.406 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:09:55.406 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
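The rpc_cmd calls traced just above stand up the target that bdevio exercises: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.3:4420. gen_nvmf_target_json then assembles the bdev_nvme_attach_controller JSON that bdevio reads from /dev/fd/62, printed immediately below. Reproduced by hand against a running nvmf_tgt (parameters copied from the trace; the $rpc shorthand and the default /var/tmp/spdk.sock RPC socket are assumptions of this sketch), the bring-up would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                    # transport options as used by this test
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The trace resumes below with the generated JSON and the bdevio run itself.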
00:09:55.406 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:09:55.406 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:55.406 "params": { 00:09:55.406 "name": "Nvme1", 00:09:55.406 "trtype": "tcp", 00:09:55.406 "traddr": "10.0.0.3", 00:09:55.406 "adrfam": "ipv4", 00:09:55.406 "trsvcid": "4420", 00:09:55.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.406 "hdgst": false, 00:09:55.406 "ddgst": false 00:09:55.406 }, 00:09:55.406 "method": "bdev_nvme_attach_controller" 00:09:55.406 }' 00:09:55.664 [2024-10-16 09:24:19.818832] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:09:55.664 [2024-10-16 09:24:19.818941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66954 ] 00:09:55.664 [2024-10-16 09:24:19.961906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.664 [2024-10-16 09:24:20.017150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.664 [2024-10-16 09:24:20.017298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.664 [2024-10-16 09:24:20.017300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.922 [2024-10-16 09:24:20.083471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.922 I/O targets: 00:09:55.922 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:55.922 00:09:55.922 00:09:55.922 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.922 http://cunit.sourceforge.net/ 00:09:55.922 00:09:55.922 00:09:55.922 Suite: bdevio tests on: Nvme1n1 00:09:55.922 Test: blockdev write read block ...passed 00:09:55.922 Test: blockdev write zeroes read block ...passed 00:09:55.922 Test: blockdev write zeroes read no split ...passed 00:09:55.922 Test: blockdev write zeroes read split ...passed 00:09:55.922 Test: blockdev write zeroes read split partial ...passed 00:09:55.922 Test: blockdev reset ...[2024-10-16 09:24:20.235553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:55.922 [2024-10-16 09:24:20.235672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1344040 (9): Bad file descriptor 00:09:55.922 [2024-10-16 09:24:20.251114] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:55.922 passed 00:09:55.922 Test: blockdev write read 8 blocks ...passed 00:09:55.922 Test: blockdev write read size > 128k ...passed 00:09:55.922 Test: blockdev write read invalid size ...passed 00:09:55.922 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:55.922 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:55.922 Test: blockdev write read max offset ...passed 00:09:55.922 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:55.922 Test: blockdev writev readv 8 blocks ...passed 00:09:55.922 Test: blockdev writev readv 30 x 1block ...passed 00:09:55.922 Test: blockdev writev readv block ...passed 00:09:55.922 Test: blockdev writev readv size > 128k ...passed 00:09:55.922 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:55.922 Test: blockdev comparev and writev ...[2024-10-16 09:24:20.261291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 [2024-10-16 09:24:20.261732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.261878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 [2024-10-16 09:24:20.261979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.262383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 [2024-10-16 09:24:20.262647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.262914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 [2024-10-16 09:24:20.263246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.263686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 [2024-10-16 09:24:20.263919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.264198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 [2024-10-16 09:24:20.264418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.264838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 [2024-10-16 09:24:20.265072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.265345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:55.922 passed 00:09:55.922 Test: blockdev nvme passthru rw ...[2024-10-16 09:24:20.265682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:55.922 passed 00:09:55.922 Test: blockdev nvme passthru vendor specific ...[2024-10-16 09:24:20.266729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.922 [2024-10-16 09:24:20.266870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.267114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.922 [2024-10-16 09:24:20.267237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.267438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.922 [2024-10-16 09:24:20.267584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:55.922 [2024-10-16 09:24:20.267801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:55.922 [2024-10-16 09:24:20.267901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:55.922 passed 00:09:55.922 Test: blockdev nvme admin passthru ...passed 00:09:55.922 Test: blockdev copy ...passed 00:09:55.922 00:09:55.922 Run Summary: Type Total Ran Passed Failed Inactive 00:09:55.922 suites 1 1 n/a 0 0 00:09:55.922 tests 23 23 23 0 0 00:09:55.922 asserts 152 152 152 0 n/a 00:09:55.922 00:09:55.922 Elapsed time = 0.166 seconds 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.181 rmmod nvme_tcp 00:09:56.181 rmmod nvme_fabrics 00:09:56.181 rmmod nvme_keyring 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:56.181 
09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 66931 ']' 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 66931 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 66931 ']' 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 66931 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.181 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66931 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:56.439 killing process with pid 66931 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66931' 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 66931 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 66931 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:09:56.439 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.440 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:56.440 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:56.440 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:56.698 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:56.698 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.698 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:56.698 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link 
delete nvmf_init_if 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.699 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.699 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:56.699 ************************************ 00:09:56.699 END TEST nvmf_bdevio 00:09:56.699 ************************************ 00:09:56.699 00:09:56.699 real 0m2.394s 00:09:56.699 user 0m6.416s 00:09:56.699 sys 0m0.831s 00:09:56.699 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.699 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.699 09:24:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:56.699 00:09:56.699 real 2m30.503s 00:09:56.699 user 6m31.688s 00:09:56.699 sys 0m54.358s 00:09:56.699 09:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.699 09:24:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.699 ************************************ 00:09:56.699 END TEST nvmf_target_core 00:09:56.699 ************************************ 00:09:56.699 09:24:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:56.699 09:24:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.699 09:24:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.699 09:24:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.959 ************************************ 00:09:56.959 START TEST nvmf_target_extra 00:09:56.959 ************************************ 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:56.959 * Looking for test storage... 
00:09:56.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:56.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.959 --rc genhtml_branch_coverage=1 00:09:56.959 --rc genhtml_function_coverage=1 00:09:56.959 --rc genhtml_legend=1 00:09:56.959 --rc geninfo_all_blocks=1 00:09:56.959 --rc geninfo_unexecuted_blocks=1 00:09:56.959 00:09:56.959 ' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:56.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.959 --rc genhtml_branch_coverage=1 00:09:56.959 --rc genhtml_function_coverage=1 00:09:56.959 --rc genhtml_legend=1 00:09:56.959 --rc geninfo_all_blocks=1 00:09:56.959 --rc geninfo_unexecuted_blocks=1 00:09:56.959 00:09:56.959 ' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:56.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.959 --rc genhtml_branch_coverage=1 00:09:56.959 --rc genhtml_function_coverage=1 00:09:56.959 --rc genhtml_legend=1 00:09:56.959 --rc geninfo_all_blocks=1 00:09:56.959 --rc geninfo_unexecuted_blocks=1 00:09:56.959 00:09:56.959 ' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:56.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.959 --rc genhtml_branch_coverage=1 00:09:56.959 --rc genhtml_function_coverage=1 00:09:56.959 --rc genhtml_legend=1 00:09:56.959 --rc geninfo_all_blocks=1 00:09:56.959 --rc geninfo_unexecuted_blocks=1 00:09:56.959 00:09:56.959 ' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.959 09:24:21 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.959 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:56.960 ************************************ 00:09:56.960 START TEST nvmf_auth_target 00:09:56.960 ************************************ 00:09:56.960 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:57.220 * Looking for test storage... 
00:09:57.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.220 --rc genhtml_branch_coverage=1 00:09:57.220 --rc genhtml_function_coverage=1 00:09:57.220 --rc genhtml_legend=1 00:09:57.220 --rc geninfo_all_blocks=1 00:09:57.220 --rc geninfo_unexecuted_blocks=1 00:09:57.220 00:09:57.220 ' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.220 --rc genhtml_branch_coverage=1 00:09:57.220 --rc genhtml_function_coverage=1 00:09:57.220 --rc genhtml_legend=1 00:09:57.220 --rc geninfo_all_blocks=1 00:09:57.220 --rc geninfo_unexecuted_blocks=1 00:09:57.220 00:09:57.220 ' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.220 --rc genhtml_branch_coverage=1 00:09:57.220 --rc genhtml_function_coverage=1 00:09:57.220 --rc genhtml_legend=1 00:09:57.220 --rc geninfo_all_blocks=1 00:09:57.220 --rc geninfo_unexecuted_blocks=1 00:09:57.220 00:09:57.220 ' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.220 --rc genhtml_branch_coverage=1 00:09:57.220 --rc genhtml_function_coverage=1 00:09:57.220 --rc genhtml_legend=1 00:09:57.220 --rc geninfo_all_blocks=1 00:09:57.220 --rc geninfo_unexecuted_blocks=1 00:09:57.220 00:09:57.220 ' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.220 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.220 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.221 
09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:57.221 Cannot find device "nvmf_init_br" 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:57.221 Cannot find device "nvmf_init_br2" 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:57.221 Cannot find device "nvmf_tgt_br" 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.221 Cannot find device "nvmf_tgt_br2" 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:57.221 Cannot find device "nvmf_init_br" 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:57.221 Cannot find device "nvmf_init_br2" 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:57.221 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:57.480 Cannot find device "nvmf_tgt_br" 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:57.480 Cannot find device "nvmf_tgt_br2" 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:57.480 Cannot find device "nvmf_br" 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:57.480 Cannot find device "nvmf_init_if" 00:09:57.480 09:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:57.480 Cannot find device "nvmf_init_if2" 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.480 09:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.480 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:57.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:09:57.739 00:09:57.739 --- 10.0.0.3 ping statistics --- 00:09:57.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.739 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:57.739 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:57.739 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:57.739 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:09:57.739 00:09:57.739 --- 10.0.0.4 ping statistics --- 00:09:57.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.739 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:57.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:57.740 00:09:57.740 --- 10.0.0.1 ping statistics --- 00:09:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.740 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:57.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:09:57.740 00:09:57.740 --- 10.0.0.2 ping statistics --- 00:09:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.740 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # return 0 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=67248 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 67248 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67248 ']' 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.740 09:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.999 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.999 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:57.999 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:57.999 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.999 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67268 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0e1ba802487ec95f248b32f97150a316ede548160585a717 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.c0z 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0e1ba802487ec95f248b32f97150a316ede548160585a717 0 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0e1ba802487ec95f248b32f97150a316ede548160585a717 0 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0e1ba802487ec95f248b32f97150a316ede548160585a717 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:58.259 09:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.c0z 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.c0z 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.c0z 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f6acd18d1543cc84c1a9534ba5169e19c5e8a5d6ef45286f24ca525120050f1f 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.USw 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f6acd18d1543cc84c1a9534ba5169e19c5e8a5d6ef45286f24ca525120050f1f 3 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f6acd18d1543cc84c1a9534ba5169e19c5e8a5d6ef45286f24ca525120050f1f 3 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f6acd18d1543cc84c1a9534ba5169e19c5e8a5d6ef45286f24ca525120050f1f 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.USw 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.USw 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.USw 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:09:58.259 09:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ad408c07347f3985637f53b0f02462f0 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.LeF 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ad408c07347f3985637f53b0f02462f0 1 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ad408c07347f3985637f53b0f02462f0 1 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ad408c07347f3985637f53b0f02462f0 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.LeF 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.LeF 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.LeF 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:58.259 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3199d98fcbb5dae15b7aaca577875958d4ddbe78c6b7bfb9 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.IDP 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3199d98fcbb5dae15b7aaca577875958d4ddbe78c6b7bfb9 2 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 3199d98fcbb5dae15b7aaca577875958d4ddbe78c6b7bfb9 2 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3199d98fcbb5dae15b7aaca577875958d4ddbe78c6b7bfb9 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:09:58.260 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.IDP 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.IDP 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.IDP 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=adc62788da33df2e87e5b0ce61417a87a009af66c6c646ab 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.rwW 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key adc62788da33df2e87e5b0ce61417a87a009af66c6c646ab 2 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 adc62788da33df2e87e5b0ce61417a87a009af66c6c646ab 2 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=adc62788da33df2e87e5b0ce61417a87a009af66c6c646ab 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.rwW 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.rwW 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.rwW 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:58.520 09:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=42d95f80b59f51c2484ddea080961b5d 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Gnc 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 42d95f80b59f51c2484ddea080961b5d 1 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 42d95f80b59f51c2484ddea080961b5d 1 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=42d95f80b59f51c2484ddea080961b5d 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Gnc 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Gnc 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Gnc 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0c5e966f6ac247101a9c1dff9726ef32d3dc1f6856929661d6149123beaf8ef2 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.lDX 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 
0c5e966f6ac247101a9c1dff9726ef32d3dc1f6856929661d6149123beaf8ef2 3 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0c5e966f6ac247101a9c1dff9726ef32d3dc1f6856929661d6149123beaf8ef2 3 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0c5e966f6ac247101a9c1dff9726ef32d3dc1f6856929661d6149123beaf8ef2 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.lDX 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.lDX 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.lDX 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67248 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67248 ']' 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.520 09:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.779 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.779 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:58.779 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67268 /var/tmp/host.sock 00:09:58.779 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67268 ']' 00:09:58.779 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:09:58.779 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:58.779 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
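
The gen_dhchap_key calls traced above all follow the same recipe: pull len/2 random bytes with xxd, keep them as a hex string, wrap that string in the NVMe DH-HMAC-CHAP secret format DHHC-1:<hash id>:<base64 payload>:, write the result to a mode-0600 temp file, and store the path in keys[]/ckeys[]. A minimal sketch of that helper is below; it assumes the base64 payload is the ASCII hex secret followed by its little-endian CRC-32 (which is what the "python -" step appears to compute), and the names are illustrative rather than the literal nvmf/common.sh source.

    # Sketch of the key-generation helper traced above (assumptions noted inline).
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    gen_dhchap_key() { # usage: gen_dhchap_key <digest> <hex length>
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex characters of entropy
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<hash id>:<base64(hex string + CRC-32 LE)>:  -- assumed payload layout
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    # mirrors the trace at target/auth.sh@94, e.g.: ckeys[0]=$(gen_dhchap_key sha512 64)
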
00:09:58.780 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.780 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.038 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.038 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:09:59.038 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:59.038 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.038 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.c0z 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.c0z 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.c0z 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.USw ]] 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.USw 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.USw 00:09:59.297 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.USw 00:09:59.555 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:59.556 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.LeF 00:09:59.556 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.556 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.556 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.556 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.LeF 00:09:59.556 09:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.LeF 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.IDP ]] 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IDP 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IDP 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IDP 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rwW 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rwW 00:10:00.124 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rwW 00:10:00.383 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Gnc ]] 00:10:00.383 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gnc 00:10:00.383 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.383 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.383 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.383 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gnc 00:10:00.384 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gnc 00:10:00.642 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:00.642 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lDX 00:10:00.642 09:24:24 
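
The keyring_file_add_key traffic around here registers every generated secret file twice: once with the target's RPC socket (the default /var/tmp/spdk.sock behind rpc_cmd) and once with the host application's socket (/var/tmp/host.sock behind hostrpc), so both sides can later refer to key0..key3 and ckey0..ckey2 by name. Condensed into one loop, using only the commands visible in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        # target side (default socket /var/tmp/spdk.sock)
        $rpc keyring_file_add_key "key$i" "${keys[i]}"
        # host side (the initiator app listening on /var/tmp/host.sock)
        $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"
        if [[ -n ${ckeys[i]} ]]; then   # key3 has no controller key in this run
            $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
            $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done
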
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.642 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.642 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.642 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.lDX 00:10:00.642 09:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.lDX 00:10:00.900 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:00.900 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:00.900 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:00.900 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.900 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:00.900 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.159 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.726 00:10:01.726 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.726 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.726 09:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.984 { 00:10:01.984 "cntlid": 1, 00:10:01.984 "qid": 0, 00:10:01.984 "state": "enabled", 00:10:01.984 "thread": "nvmf_tgt_poll_group_000", 00:10:01.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:01.984 "listen_address": { 00:10:01.984 "trtype": "TCP", 00:10:01.984 "adrfam": "IPv4", 00:10:01.984 "traddr": "10.0.0.3", 00:10:01.984 "trsvcid": "4420" 00:10:01.984 }, 00:10:01.984 "peer_address": { 00:10:01.984 "trtype": "TCP", 00:10:01.984 "adrfam": "IPv4", 00:10:01.984 "traddr": "10.0.0.1", 00:10:01.984 "trsvcid": "40908" 00:10:01.984 }, 00:10:01.984 "auth": { 00:10:01.984 "state": "completed", 00:10:01.984 "digest": "sha256", 00:10:01.984 "dhgroup": "null" 00:10:01.984 } 00:10:01.984 } 00:10:01.984 ]' 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.984 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.242 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:02.242 09:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.430 09:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.430 09:24:30 
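
Each connect_authenticate cycle in this trace has the same shape: restrict the host's allowed DH-HMAC-CHAP digest and DH group, grant the host NQN access to cnode0 on the target with a specific key pair, attach a controller through the host app and check that the resulting qpair negotiated the expected digest/dhgroup, then repeat the handshake with the kernel initiator via nvme-cli and tear everything down. A condensed sketch of one iteration, assembled from the commands shown above (subsystem, NQNs, address and secrets are the ones in the trace; keys[]/ckeys[] are the file paths produced earlier):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    digest=sha256 dhgroup=null keyid=0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f
    subnqn=nqn.2024-03.io.spdk:cnode0

    # host app: only accept this digest/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # target: require keyN (and ckeyN, when present) for this host;
    # the --dhchap-ctrlr-key argument is dropped when ckeyN is empty (key3).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # host app: authenticate and attach, then verify what was negotiated
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    $rpc nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'   # expect: completed sha256 null
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # kernel initiator: same handshake, passing the literal DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn##*:}" -l 0 \
        --dhchap-secret "$(cat "${keys[keyid]}")" \
        --dhchap-ctrl-secret "$(cat "${ckeys[keyid]}")"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
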
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:06.997 00:10:06.997 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.997 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:06.997 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.256 { 00:10:07.256 "cntlid": 3, 00:10:07.256 "qid": 0, 00:10:07.256 "state": "enabled", 00:10:07.256 "thread": "nvmf_tgt_poll_group_000", 00:10:07.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:07.256 "listen_address": { 00:10:07.256 "trtype": "TCP", 00:10:07.256 "adrfam": "IPv4", 00:10:07.256 "traddr": "10.0.0.3", 00:10:07.256 "trsvcid": "4420" 00:10:07.256 }, 00:10:07.256 "peer_address": { 00:10:07.256 "trtype": "TCP", 00:10:07.256 "adrfam": "IPv4", 00:10:07.256 "traddr": "10.0.0.1", 00:10:07.256 "trsvcid": "51528" 00:10:07.256 }, 00:10:07.256 "auth": { 00:10:07.256 "state": "completed", 00:10:07.256 "digest": "sha256", 00:10:07.256 "dhgroup": "null" 00:10:07.256 } 00:10:07.256 } 00:10:07.256 ]' 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.256 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.515 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret 
DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:07.515 09:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:08.449 09:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:09.017 00:10:09.017 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.017 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.017 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:09.276 { 00:10:09.276 "cntlid": 5, 00:10:09.276 "qid": 0, 00:10:09.276 "state": "enabled", 00:10:09.276 "thread": "nvmf_tgt_poll_group_000", 00:10:09.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:09.276 "listen_address": { 00:10:09.276 "trtype": "TCP", 00:10:09.276 "adrfam": "IPv4", 00:10:09.276 "traddr": "10.0.0.3", 00:10:09.276 "trsvcid": "4420" 00:10:09.276 }, 00:10:09.276 "peer_address": { 00:10:09.276 "trtype": "TCP", 00:10:09.276 "adrfam": "IPv4", 00:10:09.276 "traddr": "10.0.0.1", 00:10:09.276 "trsvcid": "51550" 00:10:09.276 }, 00:10:09.276 "auth": { 00:10:09.276 "state": "completed", 00:10:09.276 "digest": "sha256", 00:10:09.276 "dhgroup": "null" 00:10:09.276 } 00:10:09.276 } 00:10:09.276 ]' 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.276 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.845 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:09.845 09:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.413 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.672 09:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.931 00:10:10.931 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.931 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.931 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:11.190 { 00:10:11.190 "cntlid": 7, 00:10:11.190 "qid": 0, 00:10:11.190 "state": "enabled", 00:10:11.190 "thread": "nvmf_tgt_poll_group_000", 00:10:11.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:11.190 "listen_address": { 00:10:11.190 "trtype": "TCP", 00:10:11.190 "adrfam": "IPv4", 00:10:11.190 "traddr": "10.0.0.3", 00:10:11.190 "trsvcid": "4420" 00:10:11.190 }, 00:10:11.190 "peer_address": { 00:10:11.190 "trtype": "TCP", 00:10:11.190 "adrfam": "IPv4", 00:10:11.190 "traddr": "10.0.0.1", 00:10:11.190 "trsvcid": "51570" 00:10:11.190 }, 00:10:11.190 "auth": { 00:10:11.190 "state": "completed", 00:10:11.190 "digest": "sha256", 00:10:11.190 "dhgroup": "null" 00:10:11.190 } 00:10:11.190 } 00:10:11.190 ]' 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:11.190 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:11.449 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:11.449 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:11.449 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.449 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.449 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.708 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:11.708 09:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:12.275 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.534 09:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:13.100 00:10:13.100 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.100 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.100 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:13.100 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.377 { 00:10:13.377 "cntlid": 9, 00:10:13.377 "qid": 0, 00:10:13.377 "state": "enabled", 00:10:13.377 "thread": "nvmf_tgt_poll_group_000", 00:10:13.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:13.377 "listen_address": { 00:10:13.377 "trtype": "TCP", 00:10:13.377 "adrfam": "IPv4", 00:10:13.377 "traddr": "10.0.0.3", 00:10:13.377 "trsvcid": "4420" 00:10:13.377 }, 00:10:13.377 "peer_address": { 00:10:13.377 "trtype": "TCP", 00:10:13.377 "adrfam": "IPv4", 00:10:13.377 "traddr": "10.0.0.1", 00:10:13.377 "trsvcid": "51604" 00:10:13.377 }, 00:10:13.377 "auth": { 00:10:13.377 "state": "completed", 00:10:13.377 "digest": "sha256", 00:10:13.377 "dhgroup": "ffdhe2048" 00:10:13.377 } 00:10:13.377 } 00:10:13.377 ]' 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.377 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.647 
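
From here on the trace is the same cycle swept across a parameter grid: the loops at target/auth.sh@118-121 walk each digest, each DH group and each key index, which is why the qpair dumps switch from "dhgroup": "null" to "dhgroup": "ffdhe2048" while everything else repeats. Roughly, as a sketch; only sha256 with the null and ffdhe2048 groups is visible in this excerpt, so any further list members are assumptions:

    for digest in "${digests[@]}"; do          # sha256 visible in this excerpt
        for dhgroup in "${dhgroups[@]}"; do    # null, then ffdhe2048 here
            for keyid in "${!keys[@]}"; do     # 0 1 2 3
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
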
09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:13.647 09:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:14.211 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.469 09:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:15.036 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.036 { 00:10:15.036 "cntlid": 11, 00:10:15.036 "qid": 0, 00:10:15.036 "state": "enabled", 00:10:15.036 "thread": "nvmf_tgt_poll_group_000", 00:10:15.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:15.036 "listen_address": { 00:10:15.036 "trtype": "TCP", 00:10:15.036 "adrfam": "IPv4", 00:10:15.036 "traddr": "10.0.0.3", 00:10:15.036 "trsvcid": "4420" 00:10:15.036 }, 00:10:15.036 "peer_address": { 00:10:15.036 "trtype": "TCP", 00:10:15.036 "adrfam": "IPv4", 00:10:15.036 "traddr": "10.0.0.1", 00:10:15.036 "trsvcid": "44110" 00:10:15.036 }, 00:10:15.036 "auth": { 00:10:15.036 "state": "completed", 00:10:15.036 "digest": "sha256", 00:10:15.036 "dhgroup": "ffdhe2048" 00:10:15.036 } 00:10:15.036 } 00:10:15.036 ]' 00:10:15.036 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.295 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.295 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.295 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:15.295 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.295 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.295 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.295 
09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.553 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:15.553 09:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:16.120 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.121 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:16.121 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.121 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.121 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.121 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.121 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.121 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.689 09:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.947 00:10:16.947 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.947 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.947 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.206 { 00:10:17.206 "cntlid": 13, 00:10:17.206 "qid": 0, 00:10:17.206 "state": "enabled", 00:10:17.206 "thread": "nvmf_tgt_poll_group_000", 00:10:17.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:17.206 "listen_address": { 00:10:17.206 "trtype": "TCP", 00:10:17.206 "adrfam": "IPv4", 00:10:17.206 "traddr": "10.0.0.3", 00:10:17.206 "trsvcid": "4420" 00:10:17.206 }, 00:10:17.206 "peer_address": { 00:10:17.206 "trtype": "TCP", 00:10:17.206 "adrfam": "IPv4", 00:10:17.206 "traddr": "10.0.0.1", 00:10:17.206 "trsvcid": "44138" 00:10:17.206 }, 00:10:17.206 "auth": { 00:10:17.206 "state": "completed", 00:10:17.206 "digest": "sha256", 00:10:17.206 "dhgroup": "ffdhe2048" 00:10:17.206 } 00:10:17.206 } 00:10:17.206 ]' 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:17.206 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.465 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.465 09:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.465 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.465 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:17.465 09:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:18.033 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.292 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:18.292 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.292 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.292 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.292 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.292 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:18.292 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:18.552 09:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:18.810 00:10:18.810 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.810 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.810 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.069 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.069 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.069 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.069 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.069 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.069 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.069 { 00:10:19.069 "cntlid": 15, 00:10:19.069 "qid": 0, 00:10:19.069 "state": "enabled", 00:10:19.069 "thread": "nvmf_tgt_poll_group_000", 00:10:19.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:19.069 "listen_address": { 00:10:19.069 "trtype": "TCP", 00:10:19.069 "adrfam": "IPv4", 00:10:19.069 "traddr": "10.0.0.3", 00:10:19.069 "trsvcid": "4420" 00:10:19.069 }, 00:10:19.069 "peer_address": { 00:10:19.069 "trtype": "TCP", 00:10:19.069 "adrfam": "IPv4", 00:10:19.069 "traddr": "10.0.0.1", 00:10:19.069 "trsvcid": "44170" 00:10:19.069 }, 00:10:19.069 "auth": { 00:10:19.069 "state": "completed", 00:10:19.069 "digest": "sha256", 00:10:19.069 "dhgroup": "ffdhe2048" 00:10:19.069 } 00:10:19.069 } 00:10:19.069 ]' 00:10:19.069 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.328 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.328 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.328 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:19.328 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.328 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.328 
09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.328 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.587 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:19.587 09:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:20.524 09:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.091 00:10:21.091 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.091 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.091 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.350 { 00:10:21.350 "cntlid": 17, 00:10:21.350 "qid": 0, 00:10:21.350 "state": "enabled", 00:10:21.350 "thread": "nvmf_tgt_poll_group_000", 00:10:21.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:21.350 "listen_address": { 00:10:21.350 "trtype": "TCP", 00:10:21.350 "adrfam": "IPv4", 00:10:21.350 "traddr": "10.0.0.3", 00:10:21.350 "trsvcid": "4420" 00:10:21.350 }, 00:10:21.350 "peer_address": { 00:10:21.350 "trtype": "TCP", 00:10:21.350 "adrfam": "IPv4", 00:10:21.350 "traddr": "10.0.0.1", 00:10:21.350 "trsvcid": "44188" 00:10:21.350 }, 00:10:21.350 "auth": { 00:10:21.350 "state": "completed", 00:10:21.350 "digest": "sha256", 00:10:21.350 "dhgroup": "ffdhe3072" 00:10:21.350 } 00:10:21.350 } 00:10:21.350 ]' 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.350 09:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.350 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.609 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:21.609 09:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:22.177 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.745 09:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:22.745 00:10:23.004 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.004 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.004 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.263 { 00:10:23.263 "cntlid": 19, 00:10:23.263 "qid": 0, 00:10:23.263 "state": "enabled", 00:10:23.263 "thread": "nvmf_tgt_poll_group_000", 00:10:23.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:23.263 "listen_address": { 00:10:23.263 "trtype": "TCP", 00:10:23.263 "adrfam": "IPv4", 00:10:23.263 "traddr": "10.0.0.3", 00:10:23.263 "trsvcid": "4420" 00:10:23.263 }, 00:10:23.263 "peer_address": { 00:10:23.263 "trtype": "TCP", 00:10:23.263 "adrfam": "IPv4", 00:10:23.263 "traddr": "10.0.0.1", 00:10:23.263 "trsvcid": "44228" 00:10:23.263 }, 00:10:23.263 "auth": { 00:10:23.263 "state": "completed", 00:10:23.263 "digest": "sha256", 00:10:23.263 "dhgroup": "ffdhe3072" 00:10:23.263 } 00:10:23.263 } 00:10:23.263 ]' 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.263 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.830 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:23.830 09:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.398 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.658 09:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:24.922 00:10:24.922 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.922 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.922 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.181 { 00:10:25.181 "cntlid": 21, 00:10:25.181 "qid": 0, 00:10:25.181 "state": "enabled", 00:10:25.181 "thread": "nvmf_tgt_poll_group_000", 00:10:25.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:25.181 "listen_address": { 00:10:25.181 "trtype": "TCP", 00:10:25.181 "adrfam": "IPv4", 00:10:25.181 "traddr": "10.0.0.3", 00:10:25.181 "trsvcid": "4420" 00:10:25.181 }, 00:10:25.181 "peer_address": { 00:10:25.181 "trtype": "TCP", 00:10:25.181 "adrfam": "IPv4", 00:10:25.181 "traddr": "10.0.0.1", 00:10:25.181 "trsvcid": "39220" 00:10:25.181 }, 00:10:25.181 "auth": { 00:10:25.181 "state": "completed", 00:10:25.181 "digest": "sha256", 00:10:25.181 "dhgroup": "ffdhe3072" 00:10:25.181 } 00:10:25.181 } 00:10:25.181 ]' 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.181 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.181 09:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.440 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:25.440 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.440 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.440 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.440 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.698 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:25.698 09:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.266 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:26.525 09:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:26.784 00:10:26.784 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.784 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.784 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.043 { 00:10:27.043 "cntlid": 23, 00:10:27.043 "qid": 0, 00:10:27.043 "state": "enabled", 00:10:27.043 "thread": "nvmf_tgt_poll_group_000", 00:10:27.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:27.043 "listen_address": { 00:10:27.043 "trtype": "TCP", 00:10:27.043 "adrfam": "IPv4", 00:10:27.043 "traddr": "10.0.0.3", 00:10:27.043 "trsvcid": "4420" 00:10:27.043 }, 00:10:27.043 "peer_address": { 00:10:27.043 "trtype": "TCP", 00:10:27.043 "adrfam": "IPv4", 00:10:27.043 "traddr": "10.0.0.1", 00:10:27.043 "trsvcid": "39250" 00:10:27.043 }, 00:10:27.043 "auth": { 00:10:27.043 "state": "completed", 00:10:27.043 "digest": "sha256", 00:10:27.043 "dhgroup": "ffdhe3072" 00:10:27.043 } 00:10:27.043 } 00:10:27.043 ]' 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:27.043 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.302 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.302 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.302 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.302 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:27.302 09:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:28.238 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:28.496 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:28.496 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.496 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.496 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:28.496 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:28.496 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.497 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.497 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.497 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.497 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.497 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.497 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.497 09:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.756 00:10:28.756 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.756 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.756 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.015 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.015 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.015 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.015 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.015 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.015 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.015 { 00:10:29.015 "cntlid": 25, 00:10:29.015 "qid": 0, 00:10:29.015 "state": "enabled", 00:10:29.015 "thread": "nvmf_tgt_poll_group_000", 00:10:29.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:29.015 "listen_address": { 00:10:29.015 "trtype": "TCP", 00:10:29.015 "adrfam": "IPv4", 00:10:29.015 "traddr": "10.0.0.3", 00:10:29.015 "trsvcid": "4420" 00:10:29.015 }, 00:10:29.015 "peer_address": { 00:10:29.015 "trtype": "TCP", 00:10:29.015 "adrfam": "IPv4", 00:10:29.015 "traddr": "10.0.0.1", 00:10:29.015 "trsvcid": "39268" 00:10:29.015 }, 00:10:29.015 "auth": { 00:10:29.015 "state": "completed", 00:10:29.015 "digest": "sha256", 00:10:29.015 "dhgroup": "ffdhe4096" 00:10:29.015 } 00:10:29.015 } 00:10:29.015 ]' 00:10:29.015 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:29.274 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.274 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.274 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:29.274 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.274 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.274 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.274 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.533 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:29.533 09:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:30.101 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.359 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.618 00:10:30.618 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.618 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.618 09:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.877 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.877 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.877 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.877 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.136 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.136 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.136 { 00:10:31.136 "cntlid": 27, 00:10:31.136 "qid": 0, 00:10:31.136 "state": "enabled", 00:10:31.136 "thread": "nvmf_tgt_poll_group_000", 00:10:31.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:31.136 "listen_address": { 00:10:31.136 "trtype": "TCP", 00:10:31.136 "adrfam": "IPv4", 00:10:31.137 "traddr": "10.0.0.3", 00:10:31.137 "trsvcid": "4420" 00:10:31.137 }, 00:10:31.137 "peer_address": { 00:10:31.137 "trtype": "TCP", 00:10:31.137 "adrfam": "IPv4", 00:10:31.137 "traddr": "10.0.0.1", 00:10:31.137 "trsvcid": "39284" 00:10:31.137 }, 00:10:31.137 "auth": { 00:10:31.137 "state": "completed", 
00:10:31.137 "digest": "sha256", 00:10:31.137 "dhgroup": "ffdhe4096" 00:10:31.137 } 00:10:31.137 } 00:10:31.137 ]' 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.137 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.416 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:31.416 09:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:32.000 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.259 09:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.259 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:32.518 00:10:32.518 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.518 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.518 09:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.777 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.777 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.777 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.777 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.035 { 00:10:33.035 "cntlid": 29, 00:10:33.035 "qid": 0, 00:10:33.035 "state": "enabled", 00:10:33.035 "thread": "nvmf_tgt_poll_group_000", 00:10:33.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:33.035 "listen_address": { 00:10:33.035 "trtype": "TCP", 00:10:33.035 "adrfam": "IPv4", 00:10:33.035 "traddr": "10.0.0.3", 00:10:33.035 "trsvcid": "4420" 00:10:33.035 }, 00:10:33.035 "peer_address": { 00:10:33.035 "trtype": "TCP", 00:10:33.035 "adrfam": 
"IPv4", 00:10:33.035 "traddr": "10.0.0.1", 00:10:33.035 "trsvcid": "39318" 00:10:33.035 }, 00:10:33.035 "auth": { 00:10:33.035 "state": "completed", 00:10:33.035 "digest": "sha256", 00:10:33.035 "dhgroup": "ffdhe4096" 00:10:33.035 } 00:10:33.035 } 00:10:33.035 ]' 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.035 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.293 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:33.293 09:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.860 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:34.119 09:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.119 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:34.378 00:10:34.378 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.378 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.378 09:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.637 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.637 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.637 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.638 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.638 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.638 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.638 { 00:10:34.638 "cntlid": 31, 00:10:34.638 "qid": 0, 00:10:34.638 "state": "enabled", 00:10:34.638 "thread": "nvmf_tgt_poll_group_000", 00:10:34.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:34.638 "listen_address": { 00:10:34.638 "trtype": "TCP", 00:10:34.638 "adrfam": "IPv4", 00:10:34.638 "traddr": "10.0.0.3", 00:10:34.638 "trsvcid": "4420" 00:10:34.638 }, 00:10:34.638 "peer_address": { 00:10:34.638 "trtype": "TCP", 
00:10:34.638 "adrfam": "IPv4", 00:10:34.638 "traddr": "10.0.0.1", 00:10:34.638 "trsvcid": "47300" 00:10:34.638 }, 00:10:34.638 "auth": { 00:10:34.638 "state": "completed", 00:10:34.638 "digest": "sha256", 00:10:34.638 "dhgroup": "ffdhe4096" 00:10:34.638 } 00:10:34.638 } 00:10:34.638 ]' 00:10:34.638 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.896 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.896 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.896 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.896 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.896 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.896 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.897 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.155 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:35.155 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:35.722 09:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:35.982 
09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.982 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:36.241 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.500 { 00:10:36.500 "cntlid": 33, 00:10:36.500 "qid": 0, 00:10:36.500 "state": "enabled", 00:10:36.500 "thread": "nvmf_tgt_poll_group_000", 00:10:36.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:36.500 "listen_address": { 00:10:36.500 "trtype": "TCP", 00:10:36.500 "adrfam": "IPv4", 00:10:36.500 "traddr": 
"10.0.0.3", 00:10:36.500 "trsvcid": "4420" 00:10:36.500 }, 00:10:36.500 "peer_address": { 00:10:36.500 "trtype": "TCP", 00:10:36.500 "adrfam": "IPv4", 00:10:36.500 "traddr": "10.0.0.1", 00:10:36.500 "trsvcid": "47334" 00:10:36.500 }, 00:10:36.500 "auth": { 00:10:36.500 "state": "completed", 00:10:36.500 "digest": "sha256", 00:10:36.500 "dhgroup": "ffdhe6144" 00:10:36.500 } 00:10:36.500 } 00:10:36.500 ]' 00:10:36.500 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.759 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.759 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.759 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:36.759 09:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.759 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.759 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.759 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.018 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:37.018 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:37.586 09:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.845 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.470 00:10:38.470 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.470 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.470 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.729 { 00:10:38.729 "cntlid": 35, 00:10:38.729 "qid": 0, 00:10:38.729 "state": "enabled", 00:10:38.729 "thread": "nvmf_tgt_poll_group_000", 
00:10:38.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:38.729 "listen_address": { 00:10:38.729 "trtype": "TCP", 00:10:38.729 "adrfam": "IPv4", 00:10:38.729 "traddr": "10.0.0.3", 00:10:38.729 "trsvcid": "4420" 00:10:38.729 }, 00:10:38.729 "peer_address": { 00:10:38.729 "trtype": "TCP", 00:10:38.729 "adrfam": "IPv4", 00:10:38.729 "traddr": "10.0.0.1", 00:10:38.729 "trsvcid": "47364" 00:10:38.729 }, 00:10:38.729 "auth": { 00:10:38.729 "state": "completed", 00:10:38.729 "digest": "sha256", 00:10:38.729 "dhgroup": "ffdhe6144" 00:10:38.729 } 00:10:38.729 } 00:10:38.729 ]' 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.729 09:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.729 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:38.729 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.729 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.729 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.729 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.297 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:39.297 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:39.556 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.556 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:39.814 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.814 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.814 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.814 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.814 09:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:39.814 09:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.074 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.333 00:10:40.333 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.333 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.333 09:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.899 { 
00:10:40.899 "cntlid": 37, 00:10:40.899 "qid": 0, 00:10:40.899 "state": "enabled", 00:10:40.899 "thread": "nvmf_tgt_poll_group_000", 00:10:40.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:40.899 "listen_address": { 00:10:40.899 "trtype": "TCP", 00:10:40.899 "adrfam": "IPv4", 00:10:40.899 "traddr": "10.0.0.3", 00:10:40.899 "trsvcid": "4420" 00:10:40.899 }, 00:10:40.899 "peer_address": { 00:10:40.899 "trtype": "TCP", 00:10:40.899 "adrfam": "IPv4", 00:10:40.899 "traddr": "10.0.0.1", 00:10:40.899 "trsvcid": "47392" 00:10:40.899 }, 00:10:40.899 "auth": { 00:10:40.899 "state": "completed", 00:10:40.899 "digest": "sha256", 00:10:40.899 "dhgroup": "ffdhe6144" 00:10:40.899 } 00:10:40.899 } 00:10:40.899 ]' 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.899 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.900 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:40.900 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.900 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.900 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.900 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.158 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:41.158 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:41.725 09:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:41.984 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.552 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:42.552 { 00:10:42.552 "cntlid": 39, 00:10:42.552 "qid": 0, 00:10:42.552 "state": "enabled", 00:10:42.552 "thread": "nvmf_tgt_poll_group_000", 00:10:42.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:42.552 "listen_address": { 00:10:42.552 "trtype": "TCP", 00:10:42.552 "adrfam": "IPv4", 00:10:42.552 "traddr": "10.0.0.3", 00:10:42.552 "trsvcid": "4420" 00:10:42.552 }, 00:10:42.552 "peer_address": { 00:10:42.552 "trtype": "TCP", 00:10:42.552 "adrfam": "IPv4", 00:10:42.552 "traddr": "10.0.0.1", 00:10:42.552 "trsvcid": "47436" 00:10:42.552 }, 00:10:42.552 "auth": { 00:10:42.552 "state": "completed", 00:10:42.552 "digest": "sha256", 00:10:42.552 "dhgroup": "ffdhe6144" 00:10:42.552 } 00:10:42.552 } 00:10:42.552 ]' 00:10:42.552 09:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.811 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.811 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.811 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:42.811 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.811 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.811 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.811 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.069 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:43.069 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:43.636 09:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:43.895 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.463 00:10:44.463 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.463 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.463 09:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.734 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.734 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.734 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.734 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.734 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:44.734 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.734 { 00:10:44.734 "cntlid": 41, 00:10:44.734 "qid": 0, 00:10:44.734 "state": "enabled", 00:10:44.734 "thread": "nvmf_tgt_poll_group_000", 00:10:44.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:44.734 "listen_address": { 00:10:44.734 "trtype": "TCP", 00:10:44.734 "adrfam": "IPv4", 00:10:44.734 "traddr": "10.0.0.3", 00:10:44.734 "trsvcid": "4420" 00:10:44.734 }, 00:10:44.734 "peer_address": { 00:10:44.734 "trtype": "TCP", 00:10:44.734 "adrfam": "IPv4", 00:10:44.734 "traddr": "10.0.0.1", 00:10:44.734 "trsvcid": "43952" 00:10:44.734 }, 00:10:44.734 "auth": { 00:10:44.734 "state": "completed", 00:10:44.734 "digest": "sha256", 00:10:44.734 "dhgroup": "ffdhe8192" 00:10:44.734 } 00:10:44.734 } 00:10:44.734 ]' 00:10:44.734 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.999 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.999 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.999 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:44.999 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.999 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.999 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.999 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.257 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:45.257 09:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
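The records above repeat one verification cycle per key: bdev_nvme_set_options on the host RPC socket pins the allowed --dhchap-digests and --dhchap-dhgroups, nvmf_subsystem_add_host registers the host NQN with the key pair on the target, bdev_nvme_attach_controller performs the DH-HMAC-CHAP handshake, and nvmf_subsystem_get_qpairs plus jq confirm the negotiated digest, dhgroup and "completed" state before the controller is detached. A minimal sketch of one such cycle (sha256/ffdhe8192, key1) follows; it assumes key1/ckey1 were registered with the keyring earlier in the run (that setup is outside this excerpt) and condenses the flow rather than copying target/auth.sh verbatim.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side: restrict the digests/dhgroups the initiator may negotiate
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side: allow this host NQN with key1 (ckey1 enables bidirectional auth)
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach, which triggers the DH-HMAC-CHAP handshake
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # target side: verify what was actually negotiated on the new qpair
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect sha256
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect ffdhe8192
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect completed
  # host side: tear the bdev controller down before the nvme-cli leg of the check
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0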
00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:45.824 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.083 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.650 00:10:46.650 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.650 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.650 09:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.909 09:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.909 { 00:10:46.909 "cntlid": 43, 00:10:46.909 "qid": 0, 00:10:46.909 "state": "enabled", 00:10:46.909 "thread": "nvmf_tgt_poll_group_000", 00:10:46.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:46.909 "listen_address": { 00:10:46.909 "trtype": "TCP", 00:10:46.909 "adrfam": "IPv4", 00:10:46.909 "traddr": "10.0.0.3", 00:10:46.909 "trsvcid": "4420" 00:10:46.909 }, 00:10:46.909 "peer_address": { 00:10:46.909 "trtype": "TCP", 00:10:46.909 "adrfam": "IPv4", 00:10:46.909 "traddr": "10.0.0.1", 00:10:46.909 "trsvcid": "43972" 00:10:46.909 }, 00:10:46.909 "auth": { 00:10:46.909 "state": "completed", 00:10:46.909 "digest": "sha256", 00:10:46.909 "dhgroup": "ffdhe8192" 00:10:46.909 } 00:10:46.909 } 00:10:46.909 ]' 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.909 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.476 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:47.476 09:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
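After the SPDK bdev path checks out, each cycle repeats the check from the kernel initiator: nvme connect is handed the cleartext DHHC-1 secrets directly, the controller is dropped with nvme disconnect, and the host entry is removed from the subsystem before the next key/dhgroup combination starts. A condensed sketch of that leg, reusing the NQNs and addresses printed in the log; the secrets below are placeholders rather than the values generated by the test.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f
  # kernel initiator: authenticate with the cleartext DHHC-1 secrets (placeholders)
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 \
      --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'
  nvme disconnect -n "$subnqn"
  # target: revoke the host entry before the next key/dhgroup combination is tested
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"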
00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.043 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.302 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:48.302 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.302 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:48.302 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.303 09:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.870 00:10:48.870 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.870 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.870 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.129 09:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.129 { 00:10:49.129 "cntlid": 45, 00:10:49.129 "qid": 0, 00:10:49.129 "state": "enabled", 00:10:49.129 "thread": "nvmf_tgt_poll_group_000", 00:10:49.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:49.129 "listen_address": { 00:10:49.129 "trtype": "TCP", 00:10:49.129 "adrfam": "IPv4", 00:10:49.129 "traddr": "10.0.0.3", 00:10:49.129 "trsvcid": "4420" 00:10:49.129 }, 00:10:49.129 "peer_address": { 00:10:49.129 "trtype": "TCP", 00:10:49.129 "adrfam": "IPv4", 00:10:49.129 "traddr": "10.0.0.1", 00:10:49.129 "trsvcid": "44006" 00:10:49.129 }, 00:10:49.129 "auth": { 00:10:49.129 "state": "completed", 00:10:49.129 "digest": "sha256", 00:10:49.129 "dhgroup": "ffdhe8192" 00:10:49.129 } 00:10:49.129 } 00:10:49.129 ]' 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.129 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.130 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.388 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:49.388 09:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
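[editorial sketch] The checks of the form [[ sha256 == \s\h\a\2\5\6 ]] in the trace above are ordinary bash string comparisons; xtrace backslash-escapes each character of the right-hand side when printing [[ ]] expressions, which is why the expected value appears with backslashes. Written out plainly, the verification the script performs on the nvmf_subsystem_get_qpairs output amounts to the following sketch (helper form assumed, not copied from auth.sh); the expected values are the ones for this iteration of the trace.

# Sketch: verify the negotiated DH-CHAP parameters reported for the qpair.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]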
00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.325 09:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:50.892 00:10:50.892 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.892 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.892 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.151 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.151 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.151 
09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.151 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.151 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.151 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.151 { 00:10:51.151 "cntlid": 47, 00:10:51.151 "qid": 0, 00:10:51.151 "state": "enabled", 00:10:51.151 "thread": "nvmf_tgt_poll_group_000", 00:10:51.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:51.151 "listen_address": { 00:10:51.151 "trtype": "TCP", 00:10:51.151 "adrfam": "IPv4", 00:10:51.151 "traddr": "10.0.0.3", 00:10:51.151 "trsvcid": "4420" 00:10:51.151 }, 00:10:51.151 "peer_address": { 00:10:51.151 "trtype": "TCP", 00:10:51.151 "adrfam": "IPv4", 00:10:51.151 "traddr": "10.0.0.1", 00:10:51.151 "trsvcid": "44024" 00:10:51.151 }, 00:10:51.151 "auth": { 00:10:51.151 "state": "completed", 00:10:51.151 "digest": "sha256", 00:10:51.151 "dhgroup": "ffdhe8192" 00:10:51.151 } 00:10:51.151 } 00:10:51.151 ]' 00:10:51.151 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.410 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.410 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.410 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:51.410 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.410 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.410 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.410 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.669 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:51.669 09:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
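[editorial sketch] Between detaching the bdev controller and removing the host from the subsystem, each iteration also exercises the kernel initiator through nvme-cli, as in the nvme connect / nvme disconnect lines above. A standalone sketch of that step follows; the flags and NQNs are taken from the logged commands, but the DHHC-1 secrets are placeholders here rather than the values from this run.

# Sketch: kernel-initiator connect/disconnect with DH-CHAP.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f
hostid=5989d9e2-d339-420e-a2f4-bd87604f111f
# The secrets below are placeholders; the real run passes the DHHC-1 strings logged above.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "DHHC-1:<host secret>:" --dhchap-ctrl-secret "DHHC-1:<controller secret>:"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0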
00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:52.237 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.496 09:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.755 00:10:52.755 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.755 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:52.755 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.018 { 00:10:53.018 "cntlid": 49, 00:10:53.018 "qid": 0, 00:10:53.018 "state": "enabled", 00:10:53.018 "thread": "nvmf_tgt_poll_group_000", 00:10:53.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:53.018 "listen_address": { 00:10:53.018 "trtype": "TCP", 00:10:53.018 "adrfam": "IPv4", 00:10:53.018 "traddr": "10.0.0.3", 00:10:53.018 "trsvcid": "4420" 00:10:53.018 }, 00:10:53.018 "peer_address": { 00:10:53.018 "trtype": "TCP", 00:10:53.018 "adrfam": "IPv4", 00:10:53.018 "traddr": "10.0.0.1", 00:10:53.018 "trsvcid": "44060" 00:10:53.018 }, 00:10:53.018 "auth": { 00:10:53.018 "state": "completed", 00:10:53.018 "digest": "sha384", 00:10:53.018 "dhgroup": "null" 00:10:53.018 } 00:10:53.018 } 00:10:53.018 ]' 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.018 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.276 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:53.276 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.276 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.276 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.276 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.535 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:53.535 09:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:10:54.102 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.102 09:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:54.102 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.102 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.102 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.102 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.102 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:54.103 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.361 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.620 00:10:54.620 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.620 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.620 09:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.890 { 00:10:54.890 "cntlid": 51, 00:10:54.890 "qid": 0, 00:10:54.890 "state": "enabled", 00:10:54.890 "thread": "nvmf_tgt_poll_group_000", 00:10:54.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:54.890 "listen_address": { 00:10:54.890 "trtype": "TCP", 00:10:54.890 "adrfam": "IPv4", 00:10:54.890 "traddr": "10.0.0.3", 00:10:54.890 "trsvcid": "4420" 00:10:54.890 }, 00:10:54.890 "peer_address": { 00:10:54.890 "trtype": "TCP", 00:10:54.890 "adrfam": "IPv4", 00:10:54.890 "traddr": "10.0.0.1", 00:10:54.890 "trsvcid": "39600" 00:10:54.890 }, 00:10:54.890 "auth": { 00:10:54.890 "state": "completed", 00:10:54.890 "digest": "sha384", 00:10:54.890 "dhgroup": "null" 00:10:54.890 } 00:10:54.890 } 00:10:54.890 ]' 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:54.890 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.149 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:55.149 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.149 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.149 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.149 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.408 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:55.408 09:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.975 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:55.975 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.234 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.494 00:10:56.494 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.494 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.494 09:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.752 { 00:10:56.752 "cntlid": 53, 00:10:56.752 "qid": 0, 00:10:56.752 "state": "enabled", 00:10:56.752 "thread": "nvmf_tgt_poll_group_000", 00:10:56.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:56.752 "listen_address": { 00:10:56.752 "trtype": "TCP", 00:10:56.752 "adrfam": "IPv4", 00:10:56.752 "traddr": "10.0.0.3", 00:10:56.752 "trsvcid": "4420" 00:10:56.752 }, 00:10:56.752 "peer_address": { 00:10:56.752 "trtype": "TCP", 00:10:56.752 "adrfam": "IPv4", 00:10:56.752 "traddr": "10.0.0.1", 00:10:56.752 "trsvcid": "39638" 00:10:56.752 }, 00:10:56.752 "auth": { 00:10:56.752 "state": "completed", 00:10:56.752 "digest": "sha384", 00:10:56.752 "dhgroup": "null" 00:10:56.752 } 00:10:56.752 } 00:10:56.752 ]' 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:56.752 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.011 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.011 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.011 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.270 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:57.270 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:10:57.836 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.836 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:57.836 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.836 09:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.836 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.836 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.836 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:57.836 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.095 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.354 00:10:58.354 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.354 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:10:58.354 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.613 { 00:10:58.613 "cntlid": 55, 00:10:58.613 "qid": 0, 00:10:58.613 "state": "enabled", 00:10:58.613 "thread": "nvmf_tgt_poll_group_000", 00:10:58.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:10:58.613 "listen_address": { 00:10:58.613 "trtype": "TCP", 00:10:58.613 "adrfam": "IPv4", 00:10:58.613 "traddr": "10.0.0.3", 00:10:58.613 "trsvcid": "4420" 00:10:58.613 }, 00:10:58.613 "peer_address": { 00:10:58.613 "trtype": "TCP", 00:10:58.613 "adrfam": "IPv4", 00:10:58.613 "traddr": "10.0.0.1", 00:10:58.613 "trsvcid": "39660" 00:10:58.613 }, 00:10:58.613 "auth": { 00:10:58.613 "state": "completed", 00:10:58.613 "digest": "sha384", 00:10:58.613 "dhgroup": "null" 00:10:58.613 } 00:10:58.613 } 00:10:58.613 ]' 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.613 09:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.872 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:58.872 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
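[editorial sketch] The sha384/null iterations above, and the ffdhe2048 ones that follow, are driven by the nested loops visible at target/auth.sh lines 118-123 of this trace (for digest, for dhgroup, for keyid, then connect_authenticate). Expanded with only the values that actually appear in this excerpt, the driver loop looks roughly like the sketch below; hostrpc is the script's wrapper around rpc.py -s /var/tmp/host.sock shown at auth.sh line 31, and the full digest/dhgroup lists in auth.sh may be longer.

# Sketch of the driving loops, using only the values seen in this excerpt.
for digest in sha256 sha384; do
  for dhgroup in null ffdhe2048 ffdhe8192; do
    for keyid in 0 1 2 3; do
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # per-key add_host/attach/verify/teardown
    done
  done
done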
00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.439 09:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.698 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:59.698 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.698 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:59.698 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.699 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.267 00:11:00.267 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.267 
09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.267 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.527 { 00:11:00.527 "cntlid": 57, 00:11:00.527 "qid": 0, 00:11:00.527 "state": "enabled", 00:11:00.527 "thread": "nvmf_tgt_poll_group_000", 00:11:00.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:00.527 "listen_address": { 00:11:00.527 "trtype": "TCP", 00:11:00.527 "adrfam": "IPv4", 00:11:00.527 "traddr": "10.0.0.3", 00:11:00.527 "trsvcid": "4420" 00:11:00.527 }, 00:11:00.527 "peer_address": { 00:11:00.527 "trtype": "TCP", 00:11:00.527 "adrfam": "IPv4", 00:11:00.527 "traddr": "10.0.0.1", 00:11:00.527 "trsvcid": "39686" 00:11:00.527 }, 00:11:00.527 "auth": { 00:11:00.527 "state": "completed", 00:11:00.527 "digest": "sha384", 00:11:00.527 "dhgroup": "ffdhe2048" 00:11:00.527 } 00:11:00.527 } 00:11:00.527 ]' 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.527 09:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.786 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:00.786 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: 
--dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:01.353 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.612 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:01.612 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.612 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.612 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.612 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.612 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:01.612 09:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.871 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.130 00:11:02.130 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.130 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.130 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.389 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.389 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.389 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.389 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.389 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.389 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.389 { 00:11:02.389 "cntlid": 59, 00:11:02.389 "qid": 0, 00:11:02.389 "state": "enabled", 00:11:02.389 "thread": "nvmf_tgt_poll_group_000", 00:11:02.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:02.389 "listen_address": { 00:11:02.389 "trtype": "TCP", 00:11:02.389 "adrfam": "IPv4", 00:11:02.389 "traddr": "10.0.0.3", 00:11:02.389 "trsvcid": "4420" 00:11:02.389 }, 00:11:02.389 "peer_address": { 00:11:02.389 "trtype": "TCP", 00:11:02.389 "adrfam": "IPv4", 00:11:02.389 "traddr": "10.0.0.1", 00:11:02.389 "trsvcid": "39728" 00:11:02.389 }, 00:11:02.389 "auth": { 00:11:02.389 "state": "completed", 00:11:02.389 "digest": "sha384", 00:11:02.389 "dhgroup": "ffdhe2048" 00:11:02.389 } 00:11:02.389 } 00:11:02.389 ]' 00:11:02.389 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.390 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.390 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.390 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:02.390 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.648 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.648 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.648 09:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.907 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:02.907 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:03.473 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.474 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:03.474 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.474 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.474 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.474 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.474 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:03.474 09:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.732 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.991 00:11:03.991 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.991 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.991 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.558 { 00:11:04.558 "cntlid": 61, 00:11:04.558 "qid": 0, 00:11:04.558 "state": "enabled", 00:11:04.558 "thread": "nvmf_tgt_poll_group_000", 00:11:04.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:04.558 "listen_address": { 00:11:04.558 "trtype": "TCP", 00:11:04.558 "adrfam": "IPv4", 00:11:04.558 "traddr": "10.0.0.3", 00:11:04.558 "trsvcid": "4420" 00:11:04.558 }, 00:11:04.558 "peer_address": { 00:11:04.558 "trtype": "TCP", 00:11:04.558 "adrfam": "IPv4", 00:11:04.558 "traddr": "10.0.0.1", 00:11:04.558 "trsvcid": "55454" 00:11:04.558 }, 00:11:04.558 "auth": { 00:11:04.558 "state": "completed", 00:11:04.558 "digest": "sha384", 00:11:04.558 "dhgroup": "ffdhe2048" 00:11:04.558 } 00:11:04.558 } 00:11:04.558 ]' 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.558 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.559 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.559 09:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.817 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:04.817 09:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.384 09:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.644 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.209 00:11:06.209 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.209 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.209 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.468 { 00:11:06.468 "cntlid": 63, 00:11:06.468 "qid": 0, 00:11:06.468 "state": "enabled", 00:11:06.468 "thread": "nvmf_tgt_poll_group_000", 00:11:06.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:06.468 "listen_address": { 00:11:06.468 "trtype": "TCP", 00:11:06.468 "adrfam": "IPv4", 00:11:06.468 "traddr": "10.0.0.3", 00:11:06.468 "trsvcid": "4420" 00:11:06.468 }, 00:11:06.468 "peer_address": { 00:11:06.468 "trtype": "TCP", 00:11:06.468 "adrfam": "IPv4", 00:11:06.468 "traddr": "10.0.0.1", 00:11:06.468 "trsvcid": "55482" 00:11:06.468 }, 00:11:06.468 "auth": { 00:11:06.468 "state": "completed", 00:11:06.468 "digest": "sha384", 00:11:06.468 "dhgroup": "ffdhe2048" 00:11:06.468 } 00:11:06.468 } 00:11:06.468 ]' 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.468 09:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.726 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:06.726 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:07.660 09:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:07.919 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.178 00:11:08.178 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.178 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.178 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.436 { 00:11:08.436 "cntlid": 65, 00:11:08.436 "qid": 0, 00:11:08.436 "state": "enabled", 00:11:08.436 "thread": "nvmf_tgt_poll_group_000", 00:11:08.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:08.436 "listen_address": { 00:11:08.436 "trtype": "TCP", 00:11:08.436 "adrfam": "IPv4", 00:11:08.436 "traddr": "10.0.0.3", 00:11:08.436 "trsvcid": "4420" 00:11:08.436 }, 00:11:08.436 "peer_address": { 00:11:08.436 "trtype": "TCP", 00:11:08.436 "adrfam": "IPv4", 00:11:08.436 "traddr": "10.0.0.1", 00:11:08.436 "trsvcid": "55516" 00:11:08.436 }, 00:11:08.436 "auth": { 00:11:08.436 "state": "completed", 00:11:08.436 "digest": "sha384", 00:11:08.436 "dhgroup": "ffdhe3072" 00:11:08.436 } 00:11:08.436 } 00:11:08.436 ]' 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:08.436 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.694 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.694 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.694 09:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.694 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:08.694 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:09.261 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:09.828 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:09.828 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.828 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:09.828 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:09.828 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:09.828 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.828 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.829 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.829 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.829 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.829 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.829 09:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.829 09:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.087 00:11:10.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.087 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.346 { 00:11:10.346 "cntlid": 67, 00:11:10.346 "qid": 0, 00:11:10.346 "state": "enabled", 00:11:10.346 "thread": "nvmf_tgt_poll_group_000", 00:11:10.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:10.346 "listen_address": { 00:11:10.346 "trtype": "TCP", 00:11:10.346 "adrfam": "IPv4", 00:11:10.346 "traddr": "10.0.0.3", 00:11:10.346 "trsvcid": "4420" 00:11:10.346 }, 00:11:10.346 "peer_address": { 00:11:10.346 "trtype": "TCP", 00:11:10.346 "adrfam": "IPv4", 00:11:10.346 "traddr": "10.0.0.1", 00:11:10.346 "trsvcid": "55552" 00:11:10.346 }, 00:11:10.346 "auth": { 00:11:10.346 "state": "completed", 00:11:10.346 "digest": "sha384", 00:11:10.346 "dhgroup": "ffdhe3072" 00:11:10.346 } 00:11:10.346 } 00:11:10.346 ]' 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.346 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.605 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:10.605 09:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:11.231 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.798 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.799 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.799 09:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.057 00:11:12.057 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.057 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.058 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.316 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.316 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.316 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.316 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.316 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.316 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.316 { 00:11:12.316 "cntlid": 69, 00:11:12.316 "qid": 0, 00:11:12.316 "state": "enabled", 00:11:12.316 "thread": "nvmf_tgt_poll_group_000", 00:11:12.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:12.316 "listen_address": { 00:11:12.316 "trtype": "TCP", 00:11:12.316 "adrfam": "IPv4", 00:11:12.316 "traddr": "10.0.0.3", 00:11:12.316 "trsvcid": "4420" 00:11:12.316 }, 00:11:12.316 "peer_address": { 00:11:12.316 "trtype": "TCP", 00:11:12.316 "adrfam": "IPv4", 00:11:12.316 "traddr": "10.0.0.1", 00:11:12.316 "trsvcid": "55572" 00:11:12.316 }, 00:11:12.316 "auth": { 00:11:12.317 "state": "completed", 00:11:12.317 "digest": "sha384", 00:11:12.317 "dhgroup": "ffdhe3072" 00:11:12.317 } 00:11:12.317 } 00:11:12.317 ]' 00:11:12.317 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.317 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.317 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.317 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:12.317 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.575 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.575 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:12.575 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.575 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:12.576 09:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.142 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:13.710 09:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:13.969 00:11:13.969 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.969 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.969 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.969 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.969 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.969 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.969 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.228 { 00:11:14.228 "cntlid": 71, 00:11:14.228 "qid": 0, 00:11:14.228 "state": "enabled", 00:11:14.228 "thread": "nvmf_tgt_poll_group_000", 00:11:14.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:14.228 "listen_address": { 00:11:14.228 "trtype": "TCP", 00:11:14.228 "adrfam": "IPv4", 00:11:14.228 "traddr": "10.0.0.3", 00:11:14.228 "trsvcid": "4420" 00:11:14.228 }, 00:11:14.228 "peer_address": { 00:11:14.228 "trtype": "TCP", 00:11:14.228 "adrfam": "IPv4", 00:11:14.228 "traddr": "10.0.0.1", 00:11:14.228 "trsvcid": "41036" 00:11:14.228 }, 00:11:14.228 "auth": { 00:11:14.228 "state": "completed", 00:11:14.228 "digest": "sha384", 00:11:14.228 "dhgroup": "ffdhe3072" 00:11:14.228 } 00:11:14.228 } 00:11:14.228 ]' 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.228 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.487 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:14.487 09:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.055 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.314 09:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.314 09:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.911 00:11:15.911 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.911 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.911 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.170 { 00:11:16.170 "cntlid": 73, 00:11:16.170 "qid": 0, 00:11:16.170 "state": "enabled", 00:11:16.170 "thread": "nvmf_tgt_poll_group_000", 00:11:16.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:16.170 "listen_address": { 00:11:16.170 "trtype": "TCP", 00:11:16.170 "adrfam": "IPv4", 00:11:16.170 "traddr": "10.0.0.3", 00:11:16.170 "trsvcid": "4420" 00:11:16.170 }, 00:11:16.170 "peer_address": { 00:11:16.170 "trtype": "TCP", 00:11:16.170 "adrfam": "IPv4", 00:11:16.170 "traddr": "10.0.0.1", 00:11:16.170 "trsvcid": "41066" 00:11:16.170 }, 00:11:16.170 "auth": { 00:11:16.170 "state": "completed", 00:11:16.170 "digest": "sha384", 00:11:16.170 "dhgroup": "ffdhe4096" 00:11:16.170 } 00:11:16.170 } 00:11:16.170 ]' 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.170 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.429 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:16.429 09:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:16.996 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.255 09:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.255 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.822 00:11:17.822 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.822 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.822 09:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.081 { 00:11:18.081 "cntlid": 75, 00:11:18.081 "qid": 0, 00:11:18.081 "state": "enabled", 00:11:18.081 "thread": "nvmf_tgt_poll_group_000", 00:11:18.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:18.081 "listen_address": { 00:11:18.081 "trtype": "TCP", 00:11:18.081 "adrfam": "IPv4", 00:11:18.081 "traddr": "10.0.0.3", 00:11:18.081 "trsvcid": "4420" 00:11:18.081 }, 00:11:18.081 "peer_address": { 00:11:18.081 "trtype": "TCP", 00:11:18.081 "adrfam": "IPv4", 00:11:18.081 "traddr": "10.0.0.1", 00:11:18.081 "trsvcid": "41082" 00:11:18.081 }, 00:11:18.081 "auth": { 00:11:18.081 "state": "completed", 00:11:18.081 "digest": "sha384", 00:11:18.081 "dhgroup": "ffdhe4096" 00:11:18.081 } 00:11:18.081 } 00:11:18.081 ]' 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.081 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.649 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:18.649 09:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:19.216 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.475 09:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.734 00:11:19.734 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.734 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.734 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.993 { 00:11:19.993 "cntlid": 77, 00:11:19.993 "qid": 0, 00:11:19.993 "state": "enabled", 00:11:19.993 "thread": "nvmf_tgt_poll_group_000", 00:11:19.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:19.993 "listen_address": { 00:11:19.993 "trtype": "TCP", 00:11:19.993 "adrfam": "IPv4", 00:11:19.993 "traddr": "10.0.0.3", 00:11:19.993 "trsvcid": "4420" 00:11:19.993 }, 00:11:19.993 "peer_address": { 00:11:19.993 "trtype": "TCP", 00:11:19.993 "adrfam": "IPv4", 00:11:19.993 "traddr": "10.0.0.1", 00:11:19.993 "trsvcid": "41104" 00:11:19.993 }, 00:11:19.993 "auth": { 00:11:19.993 "state": "completed", 00:11:19.993 "digest": "sha384", 00:11:19.993 "dhgroup": "ffdhe4096" 00:11:19.993 } 00:11:19.993 } 00:11:19.993 ]' 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.993 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:20.252 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:20.252 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.252 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.252 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.252 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.511 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:20.511 09:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:21.078 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.337 09:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.337 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.596 00:11:21.596 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.596 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.596 09:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.855 { 00:11:21.855 "cntlid": 79, 00:11:21.855 "qid": 0, 00:11:21.855 "state": "enabled", 00:11:21.855 "thread": "nvmf_tgt_poll_group_000", 00:11:21.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:21.855 "listen_address": { 00:11:21.855 "trtype": "TCP", 00:11:21.855 "adrfam": "IPv4", 00:11:21.855 "traddr": "10.0.0.3", 00:11:21.855 "trsvcid": "4420" 00:11:21.855 }, 00:11:21.855 "peer_address": { 00:11:21.855 "trtype": "TCP", 00:11:21.855 "adrfam": "IPv4", 00:11:21.855 "traddr": "10.0.0.1", 00:11:21.855 "trsvcid": "41122" 00:11:21.855 }, 00:11:21.855 "auth": { 00:11:21.855 "state": "completed", 00:11:21.855 "digest": "sha384", 00:11:21.855 "dhgroup": "ffdhe4096" 00:11:21.855 } 00:11:21.855 } 00:11:21.855 ]' 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.855 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.855 09:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.114 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.114 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.114 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.114 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.114 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.373 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:22.373 09:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:22.940 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.199 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.458 00:11:23.458 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.458 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.458 09:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.717 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.717 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.717 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.717 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.717 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.717 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.717 { 00:11:23.717 "cntlid": 81, 00:11:23.717 "qid": 0, 00:11:23.717 "state": "enabled", 00:11:23.717 "thread": "nvmf_tgt_poll_group_000", 00:11:23.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:23.717 "listen_address": { 00:11:23.717 "trtype": "TCP", 00:11:23.717 "adrfam": "IPv4", 00:11:23.717 "traddr": "10.0.0.3", 00:11:23.717 "trsvcid": "4420" 00:11:23.717 }, 00:11:23.717 "peer_address": { 00:11:23.717 "trtype": "TCP", 00:11:23.717 "adrfam": "IPv4", 00:11:23.717 "traddr": "10.0.0.1", 00:11:23.717 "trsvcid": "41142" 00:11:23.717 }, 00:11:23.717 "auth": { 00:11:23.717 "state": "completed", 00:11:23.717 "digest": "sha384", 00:11:23.717 "dhgroup": "ffdhe6144" 00:11:23.717 } 00:11:23.717 } 00:11:23.717 ]' 00:11:23.717 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:11:23.976 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.976 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.976 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.976 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.976 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.976 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.976 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.235 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:24.235 09:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:24.803 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.062 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.320 00:11:25.320 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.320 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.320 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.888 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.888 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.888 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.888 09:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.888 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.888 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.888 { 00:11:25.888 "cntlid": 83, 00:11:25.888 "qid": 0, 00:11:25.888 "state": "enabled", 00:11:25.888 "thread": "nvmf_tgt_poll_group_000", 00:11:25.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:25.888 "listen_address": { 00:11:25.888 "trtype": "TCP", 00:11:25.888 "adrfam": "IPv4", 00:11:25.888 "traddr": "10.0.0.3", 00:11:25.888 "trsvcid": "4420" 00:11:25.888 }, 00:11:25.888 "peer_address": { 00:11:25.888 "trtype": "TCP", 00:11:25.888 "adrfam": "IPv4", 00:11:25.888 "traddr": "10.0.0.1", 00:11:25.888 "trsvcid": "35514" 00:11:25.888 }, 00:11:25.888 "auth": { 00:11:25.888 "state": "completed", 00:11:25.888 "digest": "sha384", 
00:11:25.888 "dhgroup": "ffdhe6144" 00:11:25.888 } 00:11:25.888 } 00:11:25.888 ]' 00:11:25.888 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.888 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.889 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.889 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:25.889 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.889 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.889 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.889 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.147 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:26.147 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.715 09:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.975 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.543 00:11:27.543 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.543 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.543 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.816 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.816 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.816 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.816 09:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.816 { 00:11:27.816 "cntlid": 85, 00:11:27.816 "qid": 0, 00:11:27.816 "state": "enabled", 00:11:27.816 "thread": "nvmf_tgt_poll_group_000", 00:11:27.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:27.816 "listen_address": { 00:11:27.816 "trtype": "TCP", 00:11:27.816 "adrfam": "IPv4", 00:11:27.816 "traddr": "10.0.0.3", 00:11:27.816 "trsvcid": "4420" 00:11:27.816 }, 00:11:27.816 "peer_address": { 00:11:27.816 "trtype": "TCP", 00:11:27.816 "adrfam": "IPv4", 00:11:27.816 "traddr": "10.0.0.1", 00:11:27.816 "trsvcid": "35542" 
00:11:27.816 }, 00:11:27.816 "auth": { 00:11:27.816 "state": "completed", 00:11:27.816 "digest": "sha384", 00:11:27.816 "dhgroup": "ffdhe6144" 00:11:27.816 } 00:11:27.816 } 00:11:27.816 ]' 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.816 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.091 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:28.091 09:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.027 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:29.028 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:29.028 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:29.595 00:11:29.595 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.595 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.595 09:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.854 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.854 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.854 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.854 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.854 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.854 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.854 { 00:11:29.854 "cntlid": 87, 00:11:29.854 "qid": 0, 00:11:29.854 "state": "enabled", 00:11:29.854 "thread": "nvmf_tgt_poll_group_000", 00:11:29.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:29.854 "listen_address": { 00:11:29.854 "trtype": "TCP", 00:11:29.854 "adrfam": "IPv4", 00:11:29.854 "traddr": "10.0.0.3", 00:11:29.854 "trsvcid": "4420" 00:11:29.854 }, 00:11:29.854 "peer_address": { 00:11:29.854 "trtype": "TCP", 00:11:29.855 "adrfam": "IPv4", 00:11:29.855 "traddr": "10.0.0.1", 00:11:29.855 "trsvcid": 
"35552" 00:11:29.855 }, 00:11:29.855 "auth": { 00:11:29.855 "state": "completed", 00:11:29.855 "digest": "sha384", 00:11:29.855 "dhgroup": "ffdhe6144" 00:11:29.855 } 00:11:29.855 } 00:11:29.855 ]' 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.855 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.114 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:30.114 09:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.682 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.941 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.508 00:11:31.508 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.508 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.508 09:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.075 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.075 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.075 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.075 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.075 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.075 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.075 { 00:11:32.075 "cntlid": 89, 00:11:32.075 "qid": 0, 00:11:32.075 "state": "enabled", 00:11:32.075 "thread": "nvmf_tgt_poll_group_000", 00:11:32.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:32.075 "listen_address": { 00:11:32.075 "trtype": "TCP", 00:11:32.075 "adrfam": "IPv4", 00:11:32.076 "traddr": "10.0.0.3", 00:11:32.076 "trsvcid": "4420" 00:11:32.076 }, 00:11:32.076 "peer_address": { 00:11:32.076 
"trtype": "TCP", 00:11:32.076 "adrfam": "IPv4", 00:11:32.076 "traddr": "10.0.0.1", 00:11:32.076 "trsvcid": "35582" 00:11:32.076 }, 00:11:32.076 "auth": { 00:11:32.076 "state": "completed", 00:11:32.076 "digest": "sha384", 00:11:32.076 "dhgroup": "ffdhe8192" 00:11:32.076 } 00:11:32.076 } 00:11:32.076 ]' 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.076 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.335 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:32.335 09:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:32.902 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:33.161 09:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.161 09:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.727 00:11:33.727 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.727 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.727 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.986 { 00:11:33.986 "cntlid": 91, 00:11:33.986 "qid": 0, 00:11:33.986 "state": "enabled", 00:11:33.986 "thread": "nvmf_tgt_poll_group_000", 00:11:33.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 
00:11:33.986 "listen_address": { 00:11:33.986 "trtype": "TCP", 00:11:33.986 "adrfam": "IPv4", 00:11:33.986 "traddr": "10.0.0.3", 00:11:33.986 "trsvcid": "4420" 00:11:33.986 }, 00:11:33.986 "peer_address": { 00:11:33.986 "trtype": "TCP", 00:11:33.986 "adrfam": "IPv4", 00:11:33.986 "traddr": "10.0.0.1", 00:11:33.986 "trsvcid": "35616" 00:11:33.986 }, 00:11:33.986 "auth": { 00:11:33.986 "state": "completed", 00:11:33.986 "digest": "sha384", 00:11:33.986 "dhgroup": "ffdhe8192" 00:11:33.986 } 00:11:33.986 } 00:11:33.986 ]' 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.986 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.244 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.244 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.244 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.503 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:34.504 09:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:35.099 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.358 09:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.925 00:11:35.925 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.925 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.925 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.183 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.183 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.183 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.183 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.183 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.183 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.183 { 00:11:36.183 "cntlid": 93, 00:11:36.183 "qid": 0, 00:11:36.184 "state": "enabled", 00:11:36.184 "thread": 
"nvmf_tgt_poll_group_000", 00:11:36.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:36.184 "listen_address": { 00:11:36.184 "trtype": "TCP", 00:11:36.184 "adrfam": "IPv4", 00:11:36.184 "traddr": "10.0.0.3", 00:11:36.184 "trsvcid": "4420" 00:11:36.184 }, 00:11:36.184 "peer_address": { 00:11:36.184 "trtype": "TCP", 00:11:36.184 "adrfam": "IPv4", 00:11:36.184 "traddr": "10.0.0.1", 00:11:36.184 "trsvcid": "35820" 00:11:36.184 }, 00:11:36.184 "auth": { 00:11:36.184 "state": "completed", 00:11:36.184 "digest": "sha384", 00:11:36.184 "dhgroup": "ffdhe8192" 00:11:36.184 } 00:11:36.184 } 00:11:36.184 ]' 00:11:36.184 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.184 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.184 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.184 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.184 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.443 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.443 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.443 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.443 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:36.443 09:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:37.011 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.011 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:37.011 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.011 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.011 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.011 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.011 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:37.011 09:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.579 09:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.837 00:11:37.837 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.837 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.837 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.405 { 00:11:38.405 "cntlid": 95, 00:11:38.405 "qid": 0, 00:11:38.405 "state": "enabled", 00:11:38.405 
"thread": "nvmf_tgt_poll_group_000", 00:11:38.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:38.405 "listen_address": { 00:11:38.405 "trtype": "TCP", 00:11:38.405 "adrfam": "IPv4", 00:11:38.405 "traddr": "10.0.0.3", 00:11:38.405 "trsvcid": "4420" 00:11:38.405 }, 00:11:38.405 "peer_address": { 00:11:38.405 "trtype": "TCP", 00:11:38.405 "adrfam": "IPv4", 00:11:38.405 "traddr": "10.0.0.1", 00:11:38.405 "trsvcid": "35846" 00:11:38.405 }, 00:11:38.405 "auth": { 00:11:38.405 "state": "completed", 00:11:38.405 "digest": "sha384", 00:11:38.405 "dhgroup": "ffdhe8192" 00:11:38.405 } 00:11:38.405 } 00:11:38.405 ]' 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.405 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.664 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:38.664 09:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.232 09:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.232 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.492 09:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.750 00:11:39.750 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.750 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.751 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.009 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.009 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.009 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.009 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.009 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.009 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.009 { 00:11:40.009 "cntlid": 97, 00:11:40.009 "qid": 0, 00:11:40.009 "state": "enabled", 00:11:40.009 "thread": "nvmf_tgt_poll_group_000", 00:11:40.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:40.009 "listen_address": { 00:11:40.009 "trtype": "TCP", 00:11:40.009 "adrfam": "IPv4", 00:11:40.009 "traddr": "10.0.0.3", 00:11:40.009 "trsvcid": "4420" 00:11:40.009 }, 00:11:40.009 "peer_address": { 00:11:40.009 "trtype": "TCP", 00:11:40.009 "adrfam": "IPv4", 00:11:40.009 "traddr": "10.0.0.1", 00:11:40.009 "trsvcid": "35864" 00:11:40.009 }, 00:11:40.009 "auth": { 00:11:40.009 "state": "completed", 00:11:40.009 "digest": "sha512", 00:11:40.009 "dhgroup": "null" 00:11:40.009 } 00:11:40.009 } 00:11:40.009 ]' 00:11:40.009 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.268 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.268 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.268 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:40.268 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.268 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.268 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.268 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.527 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:40.527 09:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.095 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.353 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.612 00:11:41.612 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.612 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.612 09:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.870 09:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.870 { 00:11:41.870 "cntlid": 99, 00:11:41.870 "qid": 0, 00:11:41.870 "state": "enabled", 00:11:41.870 "thread": "nvmf_tgt_poll_group_000", 00:11:41.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:41.870 "listen_address": { 00:11:41.870 "trtype": "TCP", 00:11:41.870 "adrfam": "IPv4", 00:11:41.870 "traddr": "10.0.0.3", 00:11:41.870 "trsvcid": "4420" 00:11:41.870 }, 00:11:41.870 "peer_address": { 00:11:41.870 "trtype": "TCP", 00:11:41.870 "adrfam": "IPv4", 00:11:41.870 "traddr": "10.0.0.1", 00:11:41.870 "trsvcid": "35906" 00:11:41.870 }, 00:11:41.870 "auth": { 00:11:41.870 "state": "completed", 00:11:41.870 "digest": "sha512", 00:11:41.870 "dhgroup": "null" 00:11:41.870 } 00:11:41.870 } 00:11:41.870 ]' 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:41.870 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.128 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.128 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.128 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.128 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:42.128 09:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:42.695 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.695 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:42.695 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.695 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.695 09:26:07 
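[Annotation] Each iteration is verified the same way as in the blob above: after the SPDK host attaches nvme0, the test queries the target for the subsystem's queue pairs and asserts the negotiated auth parameters. The jq filters are taken verbatim from the trace; the rpc.py invocation here is a simplified stand-in for the script's rpc_cmd helper and assumes the target listens on rpc.py's default socket:

  # Pull the qpair list for the subsystem and check what was actually negotiated.
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]     # digest configured for this iteration
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]       # DH group configured for this iteration
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # DH-HMAC-CHAP handshake finished

Only after these checks pass is the controller detached and the same key pair re-tested through the kernel initiator with nvme connect / nvme disconnect.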
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.695 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.695 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:42.695 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.954 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.213 00:11:43.213 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.213 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.213 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.471 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.471 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.471 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.471 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.745 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.745 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.745 { 00:11:43.745 "cntlid": 101, 00:11:43.745 "qid": 0, 00:11:43.745 "state": "enabled", 00:11:43.745 "thread": "nvmf_tgt_poll_group_000", 00:11:43.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:43.745 "listen_address": { 00:11:43.745 "trtype": "TCP", 00:11:43.745 "adrfam": "IPv4", 00:11:43.745 "traddr": "10.0.0.3", 00:11:43.745 "trsvcid": "4420" 00:11:43.745 }, 00:11:43.745 "peer_address": { 00:11:43.745 "trtype": "TCP", 00:11:43.745 "adrfam": "IPv4", 00:11:43.745 "traddr": "10.0.0.1", 00:11:43.745 "trsvcid": "35940" 00:11:43.745 }, 00:11:43.745 "auth": { 00:11:43.745 "state": "completed", 00:11:43.745 "digest": "sha512", 00:11:43.745 "dhgroup": "null" 00:11:43.745 } 00:11:43.745 } 00:11:43.745 ]' 00:11:43.745 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.745 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.745 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.745 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:43.745 09:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.745 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.745 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.745 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.012 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:44.012 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:44.579 09:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:44.838 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.839 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.839 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.839 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:44.839 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:44.839 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.098 00:11:45.098 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.098 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.098 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.357 { 00:11:45.357 "cntlid": 103, 00:11:45.357 "qid": 0, 00:11:45.357 "state": "enabled", 00:11:45.357 "thread": "nvmf_tgt_poll_group_000", 00:11:45.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:45.357 "listen_address": { 00:11:45.357 "trtype": "TCP", 00:11:45.357 "adrfam": "IPv4", 00:11:45.357 "traddr": "10.0.0.3", 00:11:45.357 "trsvcid": "4420" 00:11:45.357 }, 00:11:45.357 "peer_address": { 00:11:45.357 "trtype": "TCP", 00:11:45.357 "adrfam": "IPv4", 00:11:45.357 "traddr": "10.0.0.1", 00:11:45.357 "trsvcid": "40368" 00:11:45.357 }, 00:11:45.357 "auth": { 00:11:45.357 "state": "completed", 00:11:45.357 "digest": "sha512", 00:11:45.357 "dhgroup": "null" 00:11:45.357 } 00:11:45.357 } 00:11:45.357 ]' 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:45.357 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.616 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.616 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.616 09:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.874 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:45.874 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:46.442 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.443 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.701 09:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.960 00:11:46.960 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.960 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.960 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.219 
09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.219 { 00:11:47.219 "cntlid": 105, 00:11:47.219 "qid": 0, 00:11:47.219 "state": "enabled", 00:11:47.219 "thread": "nvmf_tgt_poll_group_000", 00:11:47.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:47.219 "listen_address": { 00:11:47.219 "trtype": "TCP", 00:11:47.219 "adrfam": "IPv4", 00:11:47.219 "traddr": "10.0.0.3", 00:11:47.219 "trsvcid": "4420" 00:11:47.219 }, 00:11:47.219 "peer_address": { 00:11:47.219 "trtype": "TCP", 00:11:47.219 "adrfam": "IPv4", 00:11:47.219 "traddr": "10.0.0.1", 00:11:47.219 "trsvcid": "40408" 00:11:47.219 }, 00:11:47.219 "auth": { 00:11:47.219 "state": "completed", 00:11:47.219 "digest": "sha512", 00:11:47.219 "dhgroup": "ffdhe2048" 00:11:47.219 } 00:11:47.219 } 00:11:47.219 ]' 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.219 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.477 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.477 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.477 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.736 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:47.736 09:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:48.304 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.304 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:48.304 09:26:12 
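[Annotation] One detail worth calling out from the cycle above: every key pair is exercised from two different initiators. First the SPDK host authenticates using keys that were loaded earlier in the run (not shown in this part of the log) and referenced by the names key<N>/ckey<N>; the controller is then detached and the Linux kernel host repeats the handshake through nvme-cli with the secrets passed inline. Supplying --dhchap-ctrl-secret in addition to --dhchap-secret is what requests bidirectional authentication, so the target must also prove knowledge of the controller key. A condensed sketch of the two commands, with placeholder host NQN/ID and secrets rather than the log's values:

  # 1) SPDK initiator: reference the pre-loaded named keys.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 2) Kernel initiator via nvme-cli: inline DHHC-1 secrets (host key + controller key).
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:<base64 host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:00:<base64 ctrl secret>:'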
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.304 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.304 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.304 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.304 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.304 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.563 09:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.822 00:11:48.822 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.822 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.822 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.081 { 00:11:49.081 "cntlid": 107, 00:11:49.081 "qid": 0, 00:11:49.081 "state": "enabled", 00:11:49.081 "thread": "nvmf_tgt_poll_group_000", 00:11:49.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:49.081 "listen_address": { 00:11:49.081 "trtype": "TCP", 00:11:49.081 "adrfam": "IPv4", 00:11:49.081 "traddr": "10.0.0.3", 00:11:49.081 "trsvcid": "4420" 00:11:49.081 }, 00:11:49.081 "peer_address": { 00:11:49.081 "trtype": "TCP", 00:11:49.081 "adrfam": "IPv4", 00:11:49.081 "traddr": "10.0.0.1", 00:11:49.081 "trsvcid": "40424" 00:11:49.081 }, 00:11:49.081 "auth": { 00:11:49.081 "state": "completed", 00:11:49.081 "digest": "sha512", 00:11:49.081 "dhgroup": "ffdhe2048" 00:11:49.081 } 00:11:49.081 } 00:11:49.081 ]' 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:49.081 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.340 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.340 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.340 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.340 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.340 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.600 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:49.600 09:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:50.168 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.427 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.686 00:11:50.686 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.686 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.686 09:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.945 { 00:11:50.945 "cntlid": 109, 00:11:50.945 "qid": 0, 00:11:50.945 "state": "enabled", 00:11:50.945 "thread": "nvmf_tgt_poll_group_000", 00:11:50.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:50.945 "listen_address": { 00:11:50.945 "trtype": "TCP", 00:11:50.945 "adrfam": "IPv4", 00:11:50.945 "traddr": "10.0.0.3", 00:11:50.945 "trsvcid": "4420" 00:11:50.945 }, 00:11:50.945 "peer_address": { 00:11:50.945 "trtype": "TCP", 00:11:50.945 "adrfam": "IPv4", 00:11:50.945 "traddr": "10.0.0.1", 00:11:50.945 "trsvcid": "40448" 00:11:50.945 }, 00:11:50.945 "auth": { 00:11:50.945 "state": "completed", 00:11:50.945 "digest": "sha512", 00:11:50.945 "dhgroup": "ffdhe2048" 00:11:50.945 } 00:11:50.945 } 00:11:50.945 ]' 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.945 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.204 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.204 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.204 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.204 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:51.204 09:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.184 09:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.184 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.443 00:11:52.443 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.443 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.443 09:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.702 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.702 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.702 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.702 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.702 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.702 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.702 { 00:11:52.702 "cntlid": 111, 00:11:52.702 "qid": 0, 00:11:52.702 "state": "enabled", 00:11:52.702 "thread": "nvmf_tgt_poll_group_000", 00:11:52.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:52.702 "listen_address": { 00:11:52.702 "trtype": "TCP", 00:11:52.702 "adrfam": "IPv4", 00:11:52.702 "traddr": "10.0.0.3", 00:11:52.702 "trsvcid": "4420" 00:11:52.702 }, 00:11:52.702 "peer_address": { 00:11:52.702 "trtype": "TCP", 00:11:52.702 "adrfam": "IPv4", 00:11:52.702 "traddr": "10.0.0.1", 00:11:52.702 "trsvcid": "40480" 00:11:52.702 }, 00:11:52.702 "auth": { 00:11:52.702 "state": "completed", 00:11:52.702 "digest": "sha512", 00:11:52.702 "dhgroup": "ffdhe2048" 00:11:52.702 } 00:11:52.702 } 00:11:52.702 ]' 00:11:52.702 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.960 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.960 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.960 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.960 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.960 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.960 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.960 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.218 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:53.218 09:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.786 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.045 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.304 00:11:54.304 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.304 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
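The trace above finishes one DH-HMAC-CHAP round (sha512 digest, ffdhe3072 group, key0) and starts verifying the resulting controller. Condensed, the per-key sequence the test script drives is sketched below. This is a summary assembled only from commands visible in this trace: the socket path, address, port and subsystem NQN are the ones printed above, `rpc_cmd` is the framework helper that (presumably) targets the nvmf target's RPC socket while `hostrpc` expands to rpc.py against /var/tmp/host.sock, and the $hostnqn / $hostid / $key / $ckey variables are placeholders standing in for the literal host UUID NQN and DHHC-1 secrets shown in the log.

  # Host-side bdev options: restrict the initiator to the digest/dhgroup under test.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # Target side: allow the host NQN on the subsystem with the key being exercised.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller over TCP; DH-HMAC-CHAP runs during this connect.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the controller exists and what the target negotiated on the queue pair.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

  # Tear down, then repeat the handshake with the kernel initiator using the raw secrets.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The same loop then repeats for each remaining key and dhgroup, which is what the following trace entries show.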
00:11:54.304 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.563 { 00:11:54.563 "cntlid": 113, 00:11:54.563 "qid": 0, 00:11:54.563 "state": "enabled", 00:11:54.563 "thread": "nvmf_tgt_poll_group_000", 00:11:54.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:54.563 "listen_address": { 00:11:54.563 "trtype": "TCP", 00:11:54.563 "adrfam": "IPv4", 00:11:54.563 "traddr": "10.0.0.3", 00:11:54.563 "trsvcid": "4420" 00:11:54.563 }, 00:11:54.563 "peer_address": { 00:11:54.563 "trtype": "TCP", 00:11:54.563 "adrfam": "IPv4", 00:11:54.563 "traddr": "10.0.0.1", 00:11:54.563 "trsvcid": "36720" 00:11:54.563 }, 00:11:54.563 "auth": { 00:11:54.563 "state": "completed", 00:11:54.563 "digest": "sha512", 00:11:54.563 "dhgroup": "ffdhe3072" 00:11:54.563 } 00:11:54.563 } 00:11:54.563 ]' 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.563 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.822 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.822 09:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.822 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.822 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.822 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.080 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:55.081 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:11:55.648 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.648 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:55.648 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.648 09:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.648 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.648 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.648 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:55.648 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.907 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.474 00:11:56.475 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.475 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.475 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.734 { 00:11:56.734 "cntlid": 115, 00:11:56.734 "qid": 0, 00:11:56.734 "state": "enabled", 00:11:56.734 "thread": "nvmf_tgt_poll_group_000", 00:11:56.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:56.734 "listen_address": { 00:11:56.734 "trtype": "TCP", 00:11:56.734 "adrfam": "IPv4", 00:11:56.734 "traddr": "10.0.0.3", 00:11:56.734 "trsvcid": "4420" 00:11:56.734 }, 00:11:56.734 "peer_address": { 00:11:56.734 "trtype": "TCP", 00:11:56.734 "adrfam": "IPv4", 00:11:56.734 "traddr": "10.0.0.1", 00:11:56.734 "trsvcid": "36744" 00:11:56.734 }, 00:11:56.734 "auth": { 00:11:56.734 "state": "completed", 00:11:56.734 "digest": "sha512", 00:11:56.734 "dhgroup": "ffdhe3072" 00:11:56.734 } 00:11:56.734 } 00:11:56.734 ]' 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.734 09:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.734 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.734 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.734 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.734 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.734 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.993 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:56.993 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 
5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:57.561 09:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.820 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.078 00:11:58.078 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.078 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.078 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.337 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.337 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.337 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.337 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.596 { 00:11:58.596 "cntlid": 117, 00:11:58.596 "qid": 0, 00:11:58.596 "state": "enabled", 00:11:58.596 "thread": "nvmf_tgt_poll_group_000", 00:11:58.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:11:58.596 "listen_address": { 00:11:58.596 "trtype": "TCP", 00:11:58.596 "adrfam": "IPv4", 00:11:58.596 "traddr": "10.0.0.3", 00:11:58.596 "trsvcid": "4420" 00:11:58.596 }, 00:11:58.596 "peer_address": { 00:11:58.596 "trtype": "TCP", 00:11:58.596 "adrfam": "IPv4", 00:11:58.596 "traddr": "10.0.0.1", 00:11:58.596 "trsvcid": "36762" 00:11:58.596 }, 00:11:58.596 "auth": { 00:11:58.596 "state": "completed", 00:11:58.596 "digest": "sha512", 00:11:58.596 "dhgroup": "ffdhe3072" 00:11:58.596 } 00:11:58.596 } 00:11:58.596 ]' 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.596 09:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.855 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:58.855 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:59.423 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.682 09:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.943 00:11:59.943 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.943 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.943 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.214 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.214 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.214 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.214 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.214 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.214 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.214 { 00:12:00.214 "cntlid": 119, 00:12:00.214 "qid": 0, 00:12:00.214 "state": "enabled", 00:12:00.214 "thread": "nvmf_tgt_poll_group_000", 00:12:00.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:00.214 "listen_address": { 00:12:00.214 "trtype": "TCP", 00:12:00.214 "adrfam": "IPv4", 00:12:00.214 "traddr": "10.0.0.3", 00:12:00.214 "trsvcid": "4420" 00:12:00.214 }, 00:12:00.214 "peer_address": { 00:12:00.214 "trtype": "TCP", 00:12:00.214 "adrfam": "IPv4", 00:12:00.214 "traddr": "10.0.0.1", 00:12:00.214 "trsvcid": "36792" 00:12:00.214 }, 00:12:00.214 "auth": { 00:12:00.214 "state": "completed", 00:12:00.214 "digest": "sha512", 00:12:00.214 "dhgroup": "ffdhe3072" 00:12:00.214 } 00:12:00.214 } 00:12:00.214 ]' 00:12:00.214 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.486 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.486 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.486 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.486 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.486 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.486 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.486 09:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.745 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:00.745 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:01.312 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.572 09:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.831 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.091 { 00:12:02.091 "cntlid": 121, 00:12:02.091 "qid": 0, 00:12:02.091 "state": "enabled", 00:12:02.091 "thread": "nvmf_tgt_poll_group_000", 00:12:02.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:02.091 "listen_address": { 00:12:02.091 "trtype": "TCP", 00:12:02.091 "adrfam": "IPv4", 00:12:02.091 "traddr": "10.0.0.3", 00:12:02.091 "trsvcid": "4420" 00:12:02.091 }, 00:12:02.091 "peer_address": { 00:12:02.091 "trtype": "TCP", 00:12:02.091 "adrfam": "IPv4", 00:12:02.091 "traddr": "10.0.0.1", 00:12:02.091 "trsvcid": "36814" 00:12:02.091 }, 00:12:02.091 "auth": { 00:12:02.091 "state": "completed", 00:12:02.091 "digest": "sha512", 00:12:02.091 "dhgroup": "ffdhe4096" 00:12:02.091 } 00:12:02.091 } 00:12:02.091 ]' 00:12:02.091 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.350 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.351 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.351 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:02.351 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.351 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.351 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.351 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.609 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret 
DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:02.610 09:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:03.177 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.436 09:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.695 00:12:03.954 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.954 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.954 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.212 { 00:12:04.212 "cntlid": 123, 00:12:04.212 "qid": 0, 00:12:04.212 "state": "enabled", 00:12:04.212 "thread": "nvmf_tgt_poll_group_000", 00:12:04.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:04.212 "listen_address": { 00:12:04.212 "trtype": "TCP", 00:12:04.212 "adrfam": "IPv4", 00:12:04.212 "traddr": "10.0.0.3", 00:12:04.212 "trsvcid": "4420" 00:12:04.212 }, 00:12:04.212 "peer_address": { 00:12:04.212 "trtype": "TCP", 00:12:04.212 "adrfam": "IPv4", 00:12:04.212 "traddr": "10.0.0.1", 00:12:04.212 "trsvcid": "47474" 00:12:04.212 }, 00:12:04.212 "auth": { 00:12:04.212 "state": "completed", 00:12:04.212 "digest": "sha512", 00:12:04.212 "dhgroup": "ffdhe4096" 00:12:04.212 } 00:12:04.212 } 00:12:04.212 ]' 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.212 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.471 09:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:12:04.471 09:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.408 09:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.408 09:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.974 00:12:05.974 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.974 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.974 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.231 { 00:12:06.231 "cntlid": 125, 00:12:06.231 "qid": 0, 00:12:06.231 "state": "enabled", 00:12:06.231 "thread": "nvmf_tgt_poll_group_000", 00:12:06.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:06.231 "listen_address": { 00:12:06.231 "trtype": "TCP", 00:12:06.231 "adrfam": "IPv4", 00:12:06.231 "traddr": "10.0.0.3", 00:12:06.231 "trsvcid": "4420" 00:12:06.231 }, 00:12:06.231 "peer_address": { 00:12:06.231 "trtype": "TCP", 00:12:06.231 "adrfam": "IPv4", 00:12:06.231 "traddr": "10.0.0.1", 00:12:06.231 "trsvcid": "47504" 00:12:06.231 }, 00:12:06.231 "auth": { 00:12:06.231 "state": "completed", 00:12:06.231 "digest": "sha512", 00:12:06.231 "dhgroup": "ffdhe4096" 00:12:06.231 } 00:12:06.231 } 00:12:06.231 ]' 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.231 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.490 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:12:06.490 09:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.057 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.625 09:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.883 00:12:07.883 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.883 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.883 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.142 { 00:12:08.142 "cntlid": 127, 00:12:08.142 "qid": 0, 00:12:08.142 "state": "enabled", 00:12:08.142 "thread": "nvmf_tgt_poll_group_000", 00:12:08.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:08.142 "listen_address": { 00:12:08.142 "trtype": "TCP", 00:12:08.142 "adrfam": "IPv4", 00:12:08.142 "traddr": "10.0.0.3", 00:12:08.142 "trsvcid": "4420" 00:12:08.142 }, 00:12:08.142 "peer_address": { 00:12:08.142 "trtype": "TCP", 00:12:08.142 "adrfam": "IPv4", 00:12:08.142 "traddr": "10.0.0.1", 00:12:08.142 "trsvcid": "47532" 00:12:08.142 }, 00:12:08.142 "auth": { 00:12:08.142 "state": "completed", 00:12:08.142 "digest": "sha512", 00:12:08.142 "dhgroup": "ffdhe4096" 00:12:08.142 } 00:12:08.142 } 00:12:08.142 ]' 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.142 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.143 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.143 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.416 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:08.416 09:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:08.996 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.255 09:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.255 09:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.823 00:12:09.823 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.823 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.823 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.083 { 00:12:10.083 "cntlid": 129, 00:12:10.083 "qid": 0, 00:12:10.083 "state": "enabled", 00:12:10.083 "thread": "nvmf_tgt_poll_group_000", 00:12:10.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:10.083 "listen_address": { 00:12:10.083 "trtype": "TCP", 00:12:10.083 "adrfam": "IPv4", 00:12:10.083 "traddr": "10.0.0.3", 00:12:10.083 "trsvcid": "4420" 00:12:10.083 }, 00:12:10.083 "peer_address": { 00:12:10.083 "trtype": "TCP", 00:12:10.083 "adrfam": "IPv4", 00:12:10.083 "traddr": "10.0.0.1", 00:12:10.083 "trsvcid": "47566" 00:12:10.083 }, 00:12:10.083 "auth": { 00:12:10.083 "state": "completed", 00:12:10.083 "digest": "sha512", 00:12:10.083 "dhgroup": "ffdhe6144" 00:12:10.083 } 00:12:10.083 } 00:12:10.083 ]' 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.083 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.342 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:10.342 09:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:10.910 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.169 09:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.169 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.737 00:12:11.737 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.737 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.737 09:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.995 { 00:12:11.995 "cntlid": 131, 00:12:11.995 "qid": 0, 00:12:11.995 "state": "enabled", 00:12:11.995 "thread": "nvmf_tgt_poll_group_000", 00:12:11.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:11.995 "listen_address": { 00:12:11.995 "trtype": "TCP", 00:12:11.995 "adrfam": "IPv4", 00:12:11.995 "traddr": "10.0.0.3", 00:12:11.995 "trsvcid": "4420" 00:12:11.995 }, 00:12:11.995 "peer_address": { 00:12:11.995 "trtype": "TCP", 00:12:11.995 "adrfam": "IPv4", 00:12:11.995 "traddr": "10.0.0.1", 00:12:11.995 "trsvcid": "47584" 00:12:11.995 }, 00:12:11.995 "auth": { 00:12:11.995 "state": "completed", 00:12:11.995 "digest": "sha512", 00:12:11.995 "dhgroup": "ffdhe6144" 00:12:11.995 } 00:12:11.995 } 00:12:11.995 ]' 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:11.995 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:12.254 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.254 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.254 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.513 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:12:12.513 09:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.080 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.338 09:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.338 09:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.905 00:12:13.905 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.905 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.905 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.164 { 00:12:14.164 "cntlid": 133, 00:12:14.164 "qid": 0, 00:12:14.164 "state": "enabled", 00:12:14.164 "thread": "nvmf_tgt_poll_group_000", 00:12:14.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:14.164 "listen_address": { 00:12:14.164 "trtype": "TCP", 00:12:14.164 "adrfam": "IPv4", 00:12:14.164 "traddr": "10.0.0.3", 00:12:14.164 "trsvcid": "4420" 00:12:14.164 }, 00:12:14.164 "peer_address": { 00:12:14.164 "trtype": "TCP", 00:12:14.164 "adrfam": "IPv4", 00:12:14.164 "traddr": "10.0.0.1", 00:12:14.164 "trsvcid": "35038" 00:12:14.164 }, 00:12:14.164 "auth": { 00:12:14.164 "state": "completed", 00:12:14.164 "digest": "sha512", 00:12:14.164 "dhgroup": "ffdhe6144" 00:12:14.164 } 00:12:14.164 } 00:12:14.164 ]' 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.164 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.732 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:12:14.732 09:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:12:14.991 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.250 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:15.250 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.250 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.250 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.250 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.250 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.251 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.510 09:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.769 00:12:15.769 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.769 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.769 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.028 { 00:12:16.028 "cntlid": 135, 00:12:16.028 "qid": 0, 00:12:16.028 "state": "enabled", 00:12:16.028 "thread": "nvmf_tgt_poll_group_000", 00:12:16.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:16.028 "listen_address": { 00:12:16.028 "trtype": "TCP", 00:12:16.028 "adrfam": "IPv4", 00:12:16.028 "traddr": "10.0.0.3", 00:12:16.028 "trsvcid": "4420" 00:12:16.028 }, 00:12:16.028 "peer_address": { 00:12:16.028 "trtype": "TCP", 00:12:16.028 "adrfam": "IPv4", 00:12:16.028 "traddr": "10.0.0.1", 00:12:16.028 "trsvcid": "35064" 00:12:16.028 }, 00:12:16.028 "auth": { 00:12:16.028 "state": "completed", 00:12:16.028 "digest": "sha512", 00:12:16.028 "dhgroup": "ffdhe6144" 00:12:16.028 } 00:12:16.028 } 00:12:16.028 ]' 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.028 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.289 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.548 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:16.548 09:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:17.146 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.147 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.406 09:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.974 00:12:17.974 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.974 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.974 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.233 { 00:12:18.233 "cntlid": 137, 00:12:18.233 "qid": 0, 00:12:18.233 "state": "enabled", 00:12:18.233 "thread": "nvmf_tgt_poll_group_000", 00:12:18.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:18.233 "listen_address": { 00:12:18.233 "trtype": "TCP", 00:12:18.233 "adrfam": "IPv4", 00:12:18.233 "traddr": "10.0.0.3", 00:12:18.233 "trsvcid": "4420" 00:12:18.233 }, 00:12:18.233 "peer_address": { 00:12:18.233 "trtype": "TCP", 00:12:18.233 "adrfam": "IPv4", 00:12:18.233 "traddr": "10.0.0.1", 00:12:18.233 "trsvcid": "35104" 00:12:18.233 }, 00:12:18.233 "auth": { 00:12:18.233 "state": "completed", 00:12:18.233 "digest": "sha512", 00:12:18.233 "dhgroup": "ffdhe8192" 00:12:18.233 } 00:12:18.233 } 00:12:18.233 ]' 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.233 09:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.233 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.492 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:18.492 09:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.060 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:19.319 09:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.319 09:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.887 00:12:19.887 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.887 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.887 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.455 { 00:12:20.455 "cntlid": 139, 00:12:20.455 "qid": 0, 00:12:20.455 "state": "enabled", 00:12:20.455 "thread": "nvmf_tgt_poll_group_000", 00:12:20.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:20.455 "listen_address": { 00:12:20.455 "trtype": "TCP", 00:12:20.455 "adrfam": "IPv4", 00:12:20.455 "traddr": "10.0.0.3", 00:12:20.455 "trsvcid": "4420" 00:12:20.455 }, 00:12:20.455 "peer_address": { 00:12:20.455 "trtype": "TCP", 00:12:20.455 "adrfam": "IPv4", 00:12:20.455 "traddr": "10.0.0.1", 00:12:20.455 "trsvcid": "35122" 00:12:20.455 }, 00:12:20.455 "auth": { 00:12:20.455 "state": "completed", 00:12:20.455 "digest": "sha512", 00:12:20.455 "dhgroup": "ffdhe8192" 00:12:20.455 } 00:12:20.455 } 00:12:20.455 ]' 00:12:20.455 09:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.455 09:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.714 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:12:20.714 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: --dhchap-ctrl-secret DHHC-1:02:MzE5OWQ5OGZjYmI1ZGFlMTViN2FhY2E1Nzc4NzU5NThkNGRkYmU3OGM2YjdiZmI5Mk3gaA==: 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:21.281 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.848 09:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.416 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.416 { 00:12:22.416 "cntlid": 141, 00:12:22.416 "qid": 0, 00:12:22.416 "state": "enabled", 00:12:22.416 "thread": "nvmf_tgt_poll_group_000", 00:12:22.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:22.416 "listen_address": { 00:12:22.416 "trtype": "TCP", 00:12:22.416 "adrfam": "IPv4", 00:12:22.416 "traddr": "10.0.0.3", 00:12:22.416 "trsvcid": "4420" 00:12:22.416 }, 00:12:22.416 "peer_address": { 00:12:22.416 "trtype": "TCP", 00:12:22.416 "adrfam": "IPv4", 00:12:22.416 "traddr": "10.0.0.1", 00:12:22.416 "trsvcid": "35160" 00:12:22.416 }, 00:12:22.416 "auth": { 00:12:22.416 "state": "completed", 00:12:22.416 "digest": 
"sha512", 00:12:22.416 "dhgroup": "ffdhe8192" 00:12:22.416 } 00:12:22.416 } 00:12:22.416 ]' 00:12:22.416 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.674 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.674 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.674 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:22.674 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.674 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.674 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.674 09:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.936 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:12:22.936 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:01:NDJkOTVmODBiNTlmNTFjMjQ4NGRkZWEwODA5NjFiNWT3vTE2: 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:23.502 09:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.761 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.328 00:12:24.328 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.328 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.328 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.588 { 00:12:24.588 "cntlid": 143, 00:12:24.588 "qid": 0, 00:12:24.588 "state": "enabled", 00:12:24.588 "thread": "nvmf_tgt_poll_group_000", 00:12:24.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:24.588 "listen_address": { 00:12:24.588 "trtype": "TCP", 00:12:24.588 "adrfam": "IPv4", 00:12:24.588 "traddr": "10.0.0.3", 00:12:24.588 "trsvcid": "4420" 00:12:24.588 }, 00:12:24.588 "peer_address": { 00:12:24.588 "trtype": "TCP", 00:12:24.588 "adrfam": "IPv4", 00:12:24.588 "traddr": "10.0.0.1", 00:12:24.588 "trsvcid": "35950" 00:12:24.588 }, 00:12:24.588 "auth": { 00:12:24.588 "state": "completed", 00:12:24.588 
"digest": "sha512", 00:12:24.588 "dhgroup": "ffdhe8192" 00:12:24.588 } 00:12:24.588 } 00:12:24.588 ]' 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.588 09:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.847 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.847 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.847 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.847 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.847 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.142 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:25.142 09:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.723 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.983 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.550 00:12:26.550 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.550 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.550 09:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.810 { 00:12:26.810 "cntlid": 145, 00:12:26.810 "qid": 0, 00:12:26.810 "state": "enabled", 00:12:26.810 "thread": "nvmf_tgt_poll_group_000", 00:12:26.810 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:26.810 "listen_address": { 00:12:26.810 "trtype": "TCP", 00:12:26.810 "adrfam": "IPv4", 00:12:26.810 "traddr": "10.0.0.3", 00:12:26.810 "trsvcid": "4420" 00:12:26.810 }, 00:12:26.810 "peer_address": { 00:12:26.810 "trtype": "TCP", 00:12:26.810 "adrfam": "IPv4", 00:12:26.810 "traddr": "10.0.0.1", 00:12:26.810 "trsvcid": "35970" 00:12:26.810 }, 00:12:26.810 "auth": { 00:12:26.810 "state": "completed", 00:12:26.810 "digest": "sha512", 00:12:26.810 "dhgroup": "ffdhe8192" 00:12:26.810 } 00:12:26.810 } 00:12:26.810 ]' 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.810 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.069 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.069 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.069 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.328 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:27.328 09:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:00:MGUxYmE4MDI0ODdlYzk1ZjI0OGIzMmY5NzE1MGEzMTZlZGU1NDgxNjA1ODVhNzE3bCjLEA==: --dhchap-ctrl-secret DHHC-1:03:ZjZhY2QxOGQxNTQzY2M4NGMxYTk1MzRiYTUxNjllMTljNWU4YTVkNmVmNDUyODZmMjRjYTUyNTEyMDA1MGYxZkGtyhU=: 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 00:12:27.896 09:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:27.896 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:28.464 request: 00:12:28.464 { 00:12:28.464 "name": "nvme0", 00:12:28.464 "trtype": "tcp", 00:12:28.464 "traddr": "10.0.0.3", 00:12:28.464 "adrfam": "ipv4", 00:12:28.464 "trsvcid": "4420", 00:12:28.464 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:28.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:28.464 "prchk_reftag": false, 00:12:28.464 "prchk_guard": false, 00:12:28.464 "hdgst": false, 00:12:28.464 "ddgst": false, 00:12:28.464 "dhchap_key": "key2", 00:12:28.464 "allow_unrecognized_csi": false, 00:12:28.464 "method": "bdev_nvme_attach_controller", 00:12:28.464 "req_id": 1 00:12:28.464 } 00:12:28.464 Got JSON-RPC error response 00:12:28.464 response: 00:12:28.464 { 00:12:28.464 "code": -5, 00:12:28.464 "message": "Input/output error" 00:12:28.464 } 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:28.464 
09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:28.464 09:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:29.032 request: 00:12:29.032 { 00:12:29.032 "name": "nvme0", 00:12:29.032 "trtype": "tcp", 00:12:29.032 "traddr": "10.0.0.3", 00:12:29.032 "adrfam": "ipv4", 00:12:29.032 "trsvcid": "4420", 00:12:29.032 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:29.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:29.032 "prchk_reftag": false, 00:12:29.032 "prchk_guard": false, 00:12:29.032 "hdgst": false, 00:12:29.032 "ddgst": false, 00:12:29.032 "dhchap_key": "key1", 00:12:29.032 "dhchap_ctrlr_key": "ckey2", 00:12:29.032 "allow_unrecognized_csi": false, 00:12:29.032 "method": "bdev_nvme_attach_controller", 00:12:29.032 "req_id": 1 00:12:29.032 } 00:12:29.032 Got JSON-RPC error response 00:12:29.032 response: 00:12:29.032 { 
00:12:29.032 "code": -5, 00:12:29.032 "message": "Input/output error" 00:12:29.032 } 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.032 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.601 
request: 00:12:29.601 { 00:12:29.601 "name": "nvme0", 00:12:29.601 "trtype": "tcp", 00:12:29.601 "traddr": "10.0.0.3", 00:12:29.601 "adrfam": "ipv4", 00:12:29.601 "trsvcid": "4420", 00:12:29.601 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:29.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:29.601 "prchk_reftag": false, 00:12:29.601 "prchk_guard": false, 00:12:29.601 "hdgst": false, 00:12:29.601 "ddgst": false, 00:12:29.601 "dhchap_key": "key1", 00:12:29.601 "dhchap_ctrlr_key": "ckey1", 00:12:29.601 "allow_unrecognized_csi": false, 00:12:29.601 "method": "bdev_nvme_attach_controller", 00:12:29.601 "req_id": 1 00:12:29.601 } 00:12:29.601 Got JSON-RPC error response 00:12:29.601 response: 00:12:29.601 { 00:12:29.601 "code": -5, 00:12:29.601 "message": "Input/output error" 00:12:29.601 } 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67248 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67248 ']' 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67248 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67248 00:12:29.601 killing process with pid 67248 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67248' 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67248 00:12:29.601 09:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67248 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:29.861 09:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=70186 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 70186 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70186 ']' 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.861 09:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.797 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.797 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:30.797 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:30.797 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.797 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70186 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70186 ']' 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
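For readability, the target restart traced above amounts to roughly the following commands; this is an illustrative sketch, not part of the captured output. The netns name, binary path, flags and socket come straight from the trace, while the backgrounding, the pid variable and the waitforlisten helper reflect how the test harness handles startup and will differ between runs.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # poll until the new target answers on its RPC socket (/var/tmp/spdk.sock)
    waitforlisten "$nvmfpid"

Here --wait-for-rpc keeps the freshly started target unconfigured until it is driven over the RPC socket, and -L nvmf_auth turns on SPDK's nvmf_auth debug log component so the DH-HMAC-CHAP negotiation steps show up in the target log.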
00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.057 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 null0 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.c0z 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.USw ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.USw 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.LeF 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.IDP ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IDP 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:31.317 09:26:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rwW 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Gnc ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gnc 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.lDX 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.317 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.576 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:31.576 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:12:31.576 09:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.513 nvme0n1 00:12:32.513 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.513 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.513 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.772 { 00:12:32.772 "cntlid": 1, 00:12:32.772 "qid": 0, 00:12:32.772 "state": "enabled", 00:12:32.772 "thread": "nvmf_tgt_poll_group_000", 00:12:32.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:32.772 "listen_address": { 00:12:32.772 "trtype": "TCP", 00:12:32.772 "adrfam": "IPv4", 00:12:32.772 "traddr": "10.0.0.3", 00:12:32.772 "trsvcid": "4420" 00:12:32.772 }, 00:12:32.772 "peer_address": { 00:12:32.772 "trtype": "TCP", 00:12:32.772 "adrfam": "IPv4", 00:12:32.772 "traddr": "10.0.0.1", 00:12:32.772 "trsvcid": "36024" 00:12:32.772 }, 00:12:32.772 "auth": { 00:12:32.772 "state": "completed", 00:12:32.772 "digest": "sha512", 00:12:32.772 "dhgroup": "ffdhe8192" 00:12:32.772 } 00:12:32.772 } 00:12:32.772 ]' 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.772 09:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.772 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.772 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.772 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.772 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.772 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.031 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:33.031 09:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key3 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:33.987 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.246 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.505 request: 00:12:34.505 { 00:12:34.505 "name": "nvme0", 00:12:34.505 "trtype": "tcp", 00:12:34.505 "traddr": "10.0.0.3", 00:12:34.505 "adrfam": "ipv4", 00:12:34.505 "trsvcid": "4420", 00:12:34.505 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:34.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:34.505 "prchk_reftag": false, 00:12:34.505 "prchk_guard": false, 00:12:34.505 "hdgst": false, 00:12:34.505 "ddgst": false, 00:12:34.505 "dhchap_key": "key3", 00:12:34.505 "allow_unrecognized_csi": false, 00:12:34.505 "method": "bdev_nvme_attach_controller", 00:12:34.505 "req_id": 1 00:12:34.505 } 00:12:34.505 Got JSON-RPC error response 00:12:34.505 response: 00:12:34.505 { 00:12:34.505 "code": -5, 00:12:34.505 "message": "Input/output error" 00:12:34.505 } 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:34.505 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.764 09:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.023 request: 00:12:35.023 { 00:12:35.023 "name": "nvme0", 00:12:35.023 "trtype": "tcp", 00:12:35.023 "traddr": "10.0.0.3", 00:12:35.023 "adrfam": "ipv4", 00:12:35.023 "trsvcid": "4420", 00:12:35.023 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:35.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:35.023 "prchk_reftag": false, 00:12:35.023 "prchk_guard": false, 00:12:35.023 "hdgst": false, 00:12:35.023 "ddgst": false, 00:12:35.023 "dhchap_key": "key3", 00:12:35.023 "allow_unrecognized_csi": false, 00:12:35.023 "method": "bdev_nvme_attach_controller", 00:12:35.023 "req_id": 1 00:12:35.023 } 00:12:35.023 Got JSON-RPC error response 00:12:35.023 response: 00:12:35.023 { 00:12:35.023 "code": -5, 00:12:35.023 "message": "Input/output error" 00:12:35.023 } 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.023 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:35.282 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:35.849 request: 00:12:35.849 { 00:12:35.849 "name": "nvme0", 00:12:35.849 "trtype": "tcp", 00:12:35.849 "traddr": "10.0.0.3", 00:12:35.849 "adrfam": "ipv4", 00:12:35.849 "trsvcid": "4420", 00:12:35.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:35.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:35.849 "prchk_reftag": false, 00:12:35.849 "prchk_guard": false, 00:12:35.849 "hdgst": false, 00:12:35.849 "ddgst": false, 00:12:35.849 "dhchap_key": "key0", 00:12:35.849 "dhchap_ctrlr_key": "key1", 00:12:35.849 "allow_unrecognized_csi": false, 00:12:35.849 "method": "bdev_nvme_attach_controller", 00:12:35.849 "req_id": 1 00:12:35.849 } 00:12:35.849 Got JSON-RPC error response 00:12:35.849 response: 00:12:35.849 { 00:12:35.849 "code": -5, 00:12:35.849 "message": "Input/output error" 00:12:35.849 } 00:12:35.849 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:35.849 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.849 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.849 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:12:35.849 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:35.849 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:35.850 09:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:35.850 nvme0n1 00:12:36.108 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:36.108 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:36.108 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.366 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.366 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.366 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.625 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 00:12:36.625 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.625 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.625 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.625 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:36.625 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:36.625 09:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:37.560 nvme0n1 00:12:37.560 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:37.560 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:37.561 09:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.819 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.819 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:37.819 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid 5989d9e2-d339-420e-a2f4-bd87604f111f -l 0 --dhchap-secret DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: --dhchap-ctrl-secret DHHC-1:03:MGM1ZTk2NmY2YWMyNDcxMDFhOWMxZGZmOTcyNmVmMzJkM2RjMWY2ODU2OTI5NjYxZDYxNDkxMjNiZWFmOGVmMrN9cww=: 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.756 09:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:38.756 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:39.323 request: 00:12:39.323 { 00:12:39.323 "name": "nvme0", 00:12:39.323 "trtype": "tcp", 00:12:39.323 "traddr": "10.0.0.3", 00:12:39.323 "adrfam": "ipv4", 00:12:39.323 "trsvcid": "4420", 00:12:39.323 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:39.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f", 00:12:39.323 "prchk_reftag": false, 00:12:39.323 "prchk_guard": false, 00:12:39.323 "hdgst": false, 00:12:39.323 "ddgst": false, 00:12:39.323 "dhchap_key": "key1", 00:12:39.323 "allow_unrecognized_csi": false, 00:12:39.323 "method": "bdev_nvme_attach_controller", 00:12:39.323 "req_id": 1 00:12:39.323 } 00:12:39.323 Got JSON-RPC error response 00:12:39.323 response: 00:12:39.323 { 00:12:39.323 "code": -5, 00:12:39.323 "message": "Input/output error" 00:12:39.323 } 00:12:39.323 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:39.323 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.323 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.323 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.323 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:39.323 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:39.323 09:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:40.257 nvme0n1 00:12:40.257 
09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:40.257 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:40.258 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.516 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.516 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.516 09:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.777 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:40.777 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.777 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.777 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.777 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:40.777 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:40.777 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:41.035 nvme0n1 00:12:41.035 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:41.035 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.035 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:41.293 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.293 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.293 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.552 09:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: '' 2s 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: ]] 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWQ0MDhjMDczNDdmMzk4NTYzN2Y1M2IwZjAyNDYyZjCKlYWr: 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:41.552 09:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: 2s 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:44.091 09:27:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: ]] 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWRjNjI3ODhkYTMzZGYyZTg3ZTViMGNlNjE0MTdhODdhMDA5YWY2NmM2YzY0NmFiEctxFQ==: 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:44.091 09:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:12:45.997 09:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.997 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:45.997 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.997 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.997 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.997 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:45.997 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:45.997 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:46.565 nvme0n1 00:12:46.565 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:46.565 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.565 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.565 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.565 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:46.565 09:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:47.130 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:47.130 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.130 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:47.387 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.387 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:47.387 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.387 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.387 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.387 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:47.387 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:47.645 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:47.645 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.645 09:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:47.904 09:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:47.904 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:48.472 request: 00:12:48.472 { 00:12:48.472 "name": "nvme0", 00:12:48.472 "dhchap_key": "key1", 00:12:48.472 "dhchap_ctrlr_key": "key3", 00:12:48.472 "method": "bdev_nvme_set_keys", 00:12:48.472 "req_id": 1 00:12:48.472 } 00:12:48.472 Got JSON-RPC error response 00:12:48.472 response: 00:12:48.472 { 00:12:48.472 "code": -13, 00:12:48.472 "message": "Permission denied" 00:12:48.472 } 00:12:48.472 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:48.472 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.472 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.472 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.472 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:48.472 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.472 09:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:48.730 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:48.730 09:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:49.668 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:49.668 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.668 09:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.926 09:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:50.862 nvme0n1 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:12:50.862 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:51.448 request: 00:12:51.448 { 00:12:51.448 "name": "nvme0", 00:12:51.448 "dhchap_key": "key2", 00:12:51.448 "dhchap_ctrlr_key": "key0", 00:12:51.448 "method": "bdev_nvme_set_keys", 00:12:51.448 "req_id": 1 00:12:51.448 } 00:12:51.448 Got JSON-RPC error response 00:12:51.448 response: 00:12:51.448 { 00:12:51.448 "code": -13, 00:12:51.448 "message": "Permission denied" 00:12:51.448 } 00:12:51.448 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:51.448 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:51.448 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:51.448 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:51.448 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:51.448 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:51.448 09:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.708 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:51.708 09:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:53.083 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:53.083 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67268 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67268 ']' 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67268 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67268 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:53.084 killing process with pid 67268 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:53.084 09:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67268' 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67268 00:12:53.084 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67268 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.652 rmmod nvme_tcp 00:12:53.652 rmmod nvme_fabrics 00:12:53.652 rmmod nvme_keyring 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 70186 ']' 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 70186 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70186 ']' 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70186 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70186 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.652 killing process with pid 70186 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70186' 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70186 00:12:53.652 09:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70186 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 
00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.911 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.c0z /tmp/spdk.key-sha256.LeF /tmp/spdk.key-sha384.rwW /tmp/spdk.key-sha512.lDX /tmp/spdk.key-sha512.USw /tmp/spdk.key-sha384.IDP /tmp/spdk.key-sha256.Gnc '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:54.170 00:12:54.170 real 2m56.993s 00:12:54.170 user 7m5.016s 00:12:54.170 sys 0m27.213s 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.170 ************************************ 00:12:54.170 END TEST nvmf_auth_target 
00:12:54.170 ************************************ 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.170 09:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.170 ************************************ 00:12:54.170 START TEST nvmf_bdevio_no_huge 00:12:54.170 ************************************ 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:54.171 * Looking for test storage... 00:12:54.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:54.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.171 --rc genhtml_branch_coverage=1 00:12:54.171 --rc genhtml_function_coverage=1 00:12:54.171 --rc genhtml_legend=1 00:12:54.171 --rc geninfo_all_blocks=1 00:12:54.171 --rc geninfo_unexecuted_blocks=1 00:12:54.171 00:12:54.171 ' 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:54.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.171 --rc genhtml_branch_coverage=1 00:12:54.171 --rc genhtml_function_coverage=1 00:12:54.171 --rc genhtml_legend=1 00:12:54.171 --rc geninfo_all_blocks=1 00:12:54.171 --rc geninfo_unexecuted_blocks=1 00:12:54.171 00:12:54.171 ' 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:54.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.171 --rc genhtml_branch_coverage=1 00:12:54.171 --rc genhtml_function_coverage=1 00:12:54.171 --rc genhtml_legend=1 00:12:54.171 --rc geninfo_all_blocks=1 00:12:54.171 --rc geninfo_unexecuted_blocks=1 00:12:54.171 00:12:54.171 ' 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:54.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.171 --rc genhtml_branch_coverage=1 00:12:54.171 --rc genhtml_function_coverage=1 00:12:54.171 --rc genhtml_legend=1 00:12:54.171 --rc geninfo_all_blocks=1 00:12:54.171 --rc geninfo_unexecuted_blocks=1 00:12:54.171 00:12:54.171 ' 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:54.171 
09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.171 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.431 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:12:54.431 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # nvmf_veth_init 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:54.432 
09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:54.432 Cannot find device "nvmf_init_br" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:54.432 Cannot find device "nvmf_init_br2" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:54.432 Cannot find device "nvmf_tgt_br" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:54.432 Cannot find device "nvmf_tgt_br2" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:54.432 Cannot find device "nvmf_init_br" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:54.432 Cannot find device "nvmf_init_br2" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:54.432 Cannot find device "nvmf_tgt_br" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:54.432 Cannot find device "nvmf_tgt_br2" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:54.432 Cannot find device "nvmf_br" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:54.432 Cannot find device "nvmf_init_if" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:54.432 Cannot find device "nvmf_init_if2" 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:54.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:54.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:54.432 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:54.691 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:54.691 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:54.691 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:54.691 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:54.692 09:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:54.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:54.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:12:54.692 00:12:54.692 --- 10.0.0.3 ping statistics --- 00:12:54.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.692 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:54.692 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:54.692 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:12:54.692 00:12:54.692 --- 10.0.0.4 ping statistics --- 00:12:54.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.692 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:54.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:54.692 00:12:54.692 --- 10.0.0.1 ping statistics --- 00:12:54.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.692 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:54.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:54.692 00:12:54.692 --- 10.0.0.2 ping statistics --- 00:12:54.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.692 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # return 0 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:54.692 09:27:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=70825 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 70825 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 70825 ']' 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.692 09:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:54.692 [2024-10-16 09:27:19.082989] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
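For the no-huge bdevio variant, the target above is launched inside the nvmf_tgt_ns_spdk namespace with hugepages disabled (--no-huge), the DPDK memory pool capped at 1024 MB (-s 1024), and core mask 0x78 (cores 3-6, matching the reactors reported just below). A minimal hand-rolled equivalent of that start-and-wait step, assuming the default RPC socket /var/tmp/spdk.sock and the repo path used in this run, would be roughly:

    # sketch only: start nvmf_tgt in the test namespace without hugepages
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    # stand-in for waitforlisten: poll the RPC socket until the app answers
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done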
00:12:54.692 [2024-10-16 09:27:19.083087] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:54.951 [2024-10-16 09:27:19.232139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.951 [2024-10-16 09:27:19.315756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.951 [2024-10-16 09:27:19.315809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.951 [2024-10-16 09:27:19.315825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.951 [2024-10-16 09:27:19.315835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.951 [2024-10-16 09:27:19.315845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.951 [2024-10-16 09:27:19.316493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:54.951 [2024-10-16 09:27:19.316619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:54.951 [2024-10-16 09:27:19.316693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.951 [2024-10-16 09:27:19.316693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:54.951 [2024-10-16 09:27:19.323027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.888 [2024-10-16 09:27:20.142871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.888 Malloc0 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.888 09:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:55.888 [2024-10-16 09:27:20.187044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:55.888 { 00:12:55.888 "params": { 00:12:55.888 "name": "Nvme$subsystem", 00:12:55.888 "trtype": "$TEST_TRANSPORT", 00:12:55.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:55.888 "adrfam": "ipv4", 00:12:55.888 "trsvcid": "$NVMF_PORT", 00:12:55.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:55.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:55.888 "hdgst": ${hdgst:-false}, 00:12:55.888 "ddgst": ${ddgst:-false} 00:12:55.888 }, 00:12:55.888 "method": "bdev_nvme_attach_controller" 00:12:55.888 } 00:12:55.888 EOF 00:12:55.888 )") 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
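The heredoc traced above is gen_nvmf_target_json filling in, per subsystem, the bdev_nvme_attach_controller parameters that bdevio reads from /dev/fd/62; the rendered JSON it produces is printed immediately below. The same attachment, performed against an already-running SPDK application by RPC instead of through a --json config, would look roughly like the following (flag spellings assumed from the standard rpc.py interface, values taken from this run):

    # sketch: attach the same controller over NVMe/TCP by RPC instead of JSON config
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1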
00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:12:55.888 09:27:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:55.888 "params": { 00:12:55.888 "name": "Nvme1", 00:12:55.888 "trtype": "tcp", 00:12:55.888 "traddr": "10.0.0.3", 00:12:55.888 "adrfam": "ipv4", 00:12:55.888 "trsvcid": "4420", 00:12:55.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:55.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:55.888 "hdgst": false, 00:12:55.888 "ddgst": false 00:12:55.888 }, 00:12:55.888 "method": "bdev_nvme_attach_controller" 00:12:55.888 }' 00:12:55.888 [2024-10-16 09:27:20.250294] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:12:55.888 [2024-10-16 09:27:20.250399] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70861 ] 00:12:56.147 [2024-10-16 09:27:20.397399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.147 [2024-10-16 09:27:20.482307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.147 [2024-10-16 09:27:20.482446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.147 [2024-10-16 09:27:20.482780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.147 [2024-10-16 09:27:20.497685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:56.406 I/O targets: 00:12:56.406 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:56.406 00:12:56.406 00:12:56.406 CUnit - A unit testing framework for C - Version 2.1-3 00:12:56.406 http://cunit.sourceforge.net/ 00:12:56.406 00:12:56.406 00:12:56.406 Suite: bdevio tests on: Nvme1n1 00:12:56.406 Test: blockdev write read block ...passed 00:12:56.406 Test: blockdev write zeroes read block ...passed 00:12:56.406 Test: blockdev write zeroes read no split ...passed 00:12:56.406 Test: blockdev write zeroes read split ...passed 00:12:56.406 Test: blockdev write zeroes read split partial ...passed 00:12:56.406 Test: blockdev reset ...[2024-10-16 09:27:20.738511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:56.406 [2024-10-16 09:27:20.738659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430720 (9): Bad file descriptor 00:12:56.406 [2024-10-16 09:27:20.755265] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:56.406 passed 00:12:56.406 Test: blockdev write read 8 blocks ...passed 00:12:56.406 Test: blockdev write read size > 128k ...passed 00:12:56.406 Test: blockdev write read invalid size ...passed 00:12:56.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:56.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:56.406 Test: blockdev write read max offset ...passed 00:12:56.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:56.406 Test: blockdev writev readv 8 blocks ...passed 00:12:56.406 Test: blockdev writev readv 30 x 1block ...passed 00:12:56.406 Test: blockdev writev readv block ...passed 00:12:56.406 Test: blockdev writev readv size > 128k ...passed 00:12:56.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:56.406 Test: blockdev comparev and writev ...[2024-10-16 09:27:20.762981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.763054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.763090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.763101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.763395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.763420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.763437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.763447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.763889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.763919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.763936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.763947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.764234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.764258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.764275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:56.406 [2024-10-16 09:27:20.764285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:56.406 passed 00:12:56.406 Test: blockdev nvme passthru rw ...passed 00:12:56.406 Test: blockdev nvme passthru vendor specific ...[2024-10-16 09:27:20.765115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.406 [2024-10-16 09:27:20.765142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.765249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.406 [2024-10-16 09:27:20.765270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.765364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.406 [2024-10-16 09:27:20.765388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:56.406 [2024-10-16 09:27:20.765489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.406 [2024-10-16 09:27:20.765512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:56.406 passed 00:12:56.406 Test: blockdev nvme admin passthru ...passed 00:12:56.406 Test: blockdev copy ...passed 00:12:56.406 00:12:56.406 Run Summary: Type Total Ran Passed Failed Inactive 00:12:56.406 suites 1 1 n/a 0 0 00:12:56.406 tests 23 23 23 0 0 00:12:56.406 asserts 152 152 152 0 n/a 00:12:56.406 00:12:56.406 Elapsed time = 0.165 seconds 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.974 rmmod nvme_tcp 00:12:56.974 rmmod nvme_fabrics 00:12:56.974 rmmod nvme_keyring 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.974 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 70825 ']' 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 70825 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 70825 ']' 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 70825 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70825 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:56.975 killing process with pid 70825 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70825' 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 70825 00:12:56.975 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 70825 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:57.233 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:57.493 09:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:57.493 00:12:57.493 real 0m3.449s 00:12:57.493 user 0m10.556s 00:12:57.493 sys 0m1.364s 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.493 ************************************ 00:12:57.493 END TEST nvmf_bdevio_no_huge 00:12:57.493 ************************************ 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.493 ************************************ 00:12:57.493 START TEST nvmf_tls 00:12:57.493 ************************************ 00:12:57.493 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:57.753 * Looking for test storage... 
00:12:57.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:57.753 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:57.753 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:12:57.753 09:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.753 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:57.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.754 --rc genhtml_branch_coverage=1 00:12:57.754 --rc genhtml_function_coverage=1 00:12:57.754 --rc genhtml_legend=1 00:12:57.754 --rc geninfo_all_blocks=1 00:12:57.754 --rc geninfo_unexecuted_blocks=1 00:12:57.754 00:12:57.754 ' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:57.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.754 --rc genhtml_branch_coverage=1 00:12:57.754 --rc genhtml_function_coverage=1 00:12:57.754 --rc genhtml_legend=1 00:12:57.754 --rc geninfo_all_blocks=1 00:12:57.754 --rc geninfo_unexecuted_blocks=1 00:12:57.754 00:12:57.754 ' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:57.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.754 --rc genhtml_branch_coverage=1 00:12:57.754 --rc genhtml_function_coverage=1 00:12:57.754 --rc genhtml_legend=1 00:12:57.754 --rc geninfo_all_blocks=1 00:12:57.754 --rc geninfo_unexecuted_blocks=1 00:12:57.754 00:12:57.754 ' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:57.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.754 --rc genhtml_branch_coverage=1 00:12:57.754 --rc genhtml_function_coverage=1 00:12:57.754 --rc genhtml_legend=1 00:12:57.754 --rc geninfo_all_blocks=1 00:12:57.754 --rc geninfo_unexecuted_blocks=1 00:12:57.754 00:12:57.754 ' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.754 09:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.754 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:57.754 
09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # nvmf_veth_init 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:57.754 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:57.755 Cannot find device "nvmf_init_br" 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:57.755 Cannot find device "nvmf_init_br2" 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:57.755 Cannot find device "nvmf_tgt_br" 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:57.755 Cannot find device "nvmf_tgt_br2" 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:57.755 Cannot find device "nvmf_init_br" 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:57.755 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.013 Cannot find device "nvmf_init_br2" 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.013 Cannot find device "nvmf_tgt_br" 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.013 Cannot find device "nvmf_tgt_br2" 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.013 Cannot find device "nvmf_br" 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.013 Cannot find device "nvmf_init_if" 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.013 Cannot find device "nvmf_init_if2" 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:58.013 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:58.014 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.014 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.014 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:58.014 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:58.014 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:58.014 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.014 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:58.272 09:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:58.272 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.272 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:58.272 00:12:58.272 --- 10.0.0.3 ping statistics --- 00:12:58.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.272 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:58.272 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:58.272 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:12:58.272 00:12:58.272 --- 10.0.0.4 ping statistics --- 00:12:58.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.272 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:58.272 00:12:58.272 --- 10.0.0.1 ping statistics --- 00:12:58.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.272 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:58.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:12:58.272 00:12:58.272 --- 10.0.0.2 ping statistics --- 00:12:58.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.272 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # return 0 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:58.272 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71100 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71100 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71100 ']' 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.273 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.273 [2024-10-16 09:27:22.562631] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
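Unlike the bdevio run, the TLS target above is started with --wait-for-rpc so the socket layer can be reconfigured before the framework comes up: the trace that follows switches the default socket implementation to ssl, pins the TLS version to 1.3, and only then runs framework_start_init and creates a listener with -k so it requires a secure channel. Condensed to the underlying rpc.py calls actually issued below (paths and key file name are the ones from this run, and the version-7/ktls probing steps are omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.YnIbv7tMwP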
00:12:58.273 [2024-10-16 09:27:22.562718] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.531 [2024-10-16 09:27:22.704471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.531 [2024-10-16 09:27:22.759091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.531 [2024-10-16 09:27:22.759152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.531 [2024-10-16 09:27:22.759166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.531 [2024-10-16 09:27:22.759176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.531 [2024-10-16 09:27:22.759185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.531 [2024-10-16 09:27:22.759649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:58.531 09:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:58.789 true 00:12:58.789 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:58.789 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:59.356 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:59.356 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:59.356 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:59.356 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:59.356 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:59.654 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:59.654 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:59.654 09:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:59.922 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:59.922 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:00.180 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:00.180 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:00.180 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.180 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:00.439 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:00.439 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:00.439 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:00.697 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.697 09:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:00.697 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:00.697 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:00.697 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:00.956 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:00.956 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:01.215 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.YnIbv7tMwP 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.mlqQ0o1yUY 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.YnIbv7tMwP 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.mlqQ0o1yUY 00:13:01.474 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:01.733 09:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:01.992 [2024-10-16 09:27:26.226532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:01.992 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.YnIbv7tMwP 00:13:01.993 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.YnIbv7tMwP 00:13:01.993 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:02.251 [2024-10-16 09:27:26.482372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.251 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:02.511 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:02.511 [2024-10-16 09:27:26.894420] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:02.511 [2024-10-16 09:27:26.894629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:02.511 09:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:02.769 malloc0 00:13:02.769 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:03.030 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.YnIbv7tMwP 00:13:03.290 09:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:03.549 09:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YnIbv7tMwP 00:13:15.756 Initializing NVMe Controllers 00:13:15.756 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:15.756 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:15.756 Initialization complete. Launching workers. 00:13:15.756 ======================================================== 00:13:15.756 Latency(us) 00:13:15.756 Device Information : IOPS MiB/s Average min max 00:13:15.756 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11223.48 43.84 5703.37 1527.95 8543.65 00:13:15.756 ======================================================== 00:13:15.756 Total : 11223.48 43.84 5703.37 1527.95 8543.65 00:13:15.756 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YnIbv7tMwP 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YnIbv7tMwP 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71329 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71329 /var/tmp/bdevperf.sock 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71329 ']' 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:15.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.756 09:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.756 [2024-10-16 09:27:38.078402] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:13:15.756 [2024-10-16 09:27:38.078503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71329 ] 00:13:15.756 [2024-10-16 09:27:38.217995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.756 [2024-10-16 09:27:38.272120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.756 [2024-10-16 09:27:38.329096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:15.756 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.756 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:15.756 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YnIbv7tMwP 00:13:15.756 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:15.756 [2024-10-16 09:27:39.532527] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:15.756 TLSTESTn1 00:13:15.756 09:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:15.756 Running I/O for 10 seconds... 
00:13:17.701 4608.00 IOPS, 18.00 MiB/s [2024-10-16T09:27:43.042Z] 4656.00 IOPS, 18.19 MiB/s [2024-10-16T09:27:43.979Z] 4650.67 IOPS, 18.17 MiB/s [2024-10-16T09:27:44.914Z] 4693.75 IOPS, 18.33 MiB/s [2024-10-16T09:27:45.851Z] 4729.40 IOPS, 18.47 MiB/s [2024-10-16T09:27:46.789Z] 4746.83 IOPS, 18.54 MiB/s [2024-10-16T09:27:47.724Z] 4761.86 IOPS, 18.60 MiB/s [2024-10-16T09:27:49.100Z] 4775.12 IOPS, 18.65 MiB/s [2024-10-16T09:27:50.036Z] 4780.56 IOPS, 18.67 MiB/s [2024-10-16T09:27:50.036Z] 4790.20 IOPS, 18.71 MiB/s 00:13:25.632 Latency(us) 00:13:25.632 [2024-10-16T09:27:50.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.632 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:25.632 Verification LBA range: start 0x0 length 0x2000 00:13:25.632 TLSTESTn1 : 10.01 4795.90 18.73 0.00 0.00 26644.84 4885.41 21686.46 00:13:25.632 [2024-10-16T09:27:50.036Z] =================================================================================================================== 00:13:25.632 [2024-10-16T09:27:50.036Z] Total : 4795.90 18.73 0.00 0.00 26644.84 4885.41 21686.46 00:13:25.632 { 00:13:25.632 "results": [ 00:13:25.632 { 00:13:25.632 "job": "TLSTESTn1", 00:13:25.632 "core_mask": "0x4", 00:13:25.632 "workload": "verify", 00:13:25.632 "status": "finished", 00:13:25.632 "verify_range": { 00:13:25.632 "start": 0, 00:13:25.632 "length": 8192 00:13:25.632 }, 00:13:25.632 "queue_depth": 128, 00:13:25.632 "io_size": 4096, 00:13:25.632 "runtime": 10.01439, 00:13:25.632 "iops": 4795.898701768156, 00:13:25.632 "mibps": 18.733979303781858, 00:13:25.632 "io_failed": 0, 00:13:25.632 "io_timeout": 0, 00:13:25.632 "avg_latency_us": 26644.839586604783, 00:13:25.632 "min_latency_us": 4885.410909090909, 00:13:25.632 "max_latency_us": 21686.458181818183 00:13:25.632 } 00:13:25.632 ], 00:13:25.632 "core_count": 1 00:13:25.632 } 00:13:25.632 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:25.632 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71329 00:13:25.632 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71329 ']' 00:13:25.632 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71329 00:13:25.632 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:25.632 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.632 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71329 00:13:25.632 killing process with pid 71329 00:13:25.632 Received shutdown signal, test time was about 10.000000 seconds 00:13:25.632 00:13:25.632 Latency(us) 00:13:25.632 [2024-10-16T09:27:50.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.633 [2024-10-16T09:27:50.037Z] =================================================================================================================== 00:13:25.633 [2024-10-16T09:27:50.037Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71329' 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71329 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71329 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mlqQ0o1yUY 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mlqQ0o1yUY 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mlqQ0o1yUY 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mlqQ0o1yUY 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71469 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71469 /var/tmp/bdevperf.sock 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71469 ']' 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.633 09:27:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.633 [2024-10-16 09:27:50.024557] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:13:25.633 [2024-10-16 09:27:50.024821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71469 ] 00:13:25.892 [2024-10-16 09:27:50.162200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.892 [2024-10-16 09:27:50.205514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.892 [2024-10-16 09:27:50.257266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:26.150 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.151 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:26.151 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mlqQ0o1yUY 00:13:26.409 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:26.668 [2024-10-16 09:27:50.865633] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.668 [2024-10-16 09:27:50.870517] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:26.668 [2024-10-16 09:27:50.871190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e4090 (107): Transport endpoint is not connected 00:13:26.668 [2024-10-16 09:27:50.872176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e4090 (9): Bad file descriptor 00:13:26.668 [2024-10-16 09:27:50.873173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:26.668 [2024-10-16 09:27:50.873199] nvme.c: 721:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:26.668 [2024-10-16 09:27:50.873210] nvme.c: 897:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:26.668 [2024-10-16 09:27:50.873240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:26.668 request: 00:13:26.668 { 00:13:26.668 "name": "TLSTEST", 00:13:26.668 "trtype": "tcp", 00:13:26.668 "traddr": "10.0.0.3", 00:13:26.668 "adrfam": "ipv4", 00:13:26.668 "trsvcid": "4420", 00:13:26.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:26.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:26.668 "prchk_reftag": false, 00:13:26.668 "prchk_guard": false, 00:13:26.668 "hdgst": false, 00:13:26.668 "ddgst": false, 00:13:26.668 "psk": "key0", 00:13:26.668 "allow_unrecognized_csi": false, 00:13:26.669 "method": "bdev_nvme_attach_controller", 00:13:26.669 "req_id": 1 00:13:26.669 } 00:13:26.669 Got JSON-RPC error response 00:13:26.669 response: 00:13:26.669 { 00:13:26.669 "code": -5, 00:13:26.669 "message": "Input/output error" 00:13:26.669 } 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71469 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71469 ']' 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71469 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71469 00:13:26.669 killing process with pid 71469 00:13:26.669 Received shutdown signal, test time was about 10.000000 seconds 00:13:26.669 00:13:26.669 Latency(us) 00:13:26.669 [2024-10-16T09:27:51.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.669 [2024-10-16T09:27:51.073Z] =================================================================================================================== 00:13:26.669 [2024-10-16T09:27:51.073Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71469' 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71469 00:13:26.669 09:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71469 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YnIbv7tMwP 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YnIbv7tMwP 
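Aside: the two key files exercised above (/tmp/tmp.YnIbv7tMwP holding the 001122... key and /tmp/tmp.mlqQ0o1yUY holding the reversed ffeedd... key) were produced by format_interchange_psk earlier in the run. Below is a hedged Python sketch of what that helper appears to emit, based only on the strings visible in this log; the little-endian CRC-32 trailer and the use of the argument's ASCII bytes as key material are assumptions, so treat the output as illustrative rather than authoritative.

import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_id: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Build an NVMe TLS PSK interchange string (sketch, see assumptions above)."""
    data = key.encode("ascii")  # 32 chars in the digest-1 case, 48 in the digest-2 case
    crc = struct.pack("<I", zlib.crc32(data) & 0xFFFFFFFF)  # assumed: CRC-32 of the key bytes, little-endian
    return "{}:{:02d}:{}:".format(prefix, hash_id, base64.b64encode(data + crc).decode("ascii"))

# Illustrative call with the first key from the log; whether the base64 tail
# matches the ...ZmZwJEiQ string above depends on the CRC assumption.
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))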
00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YnIbv7tMwP 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YnIbv7tMwP 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71490 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71490 /var/tmp/bdevperf.sock 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71490 ']' 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.928 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.928 [2024-10-16 09:27:51.142200] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:13:26.928 [2024-10-16 09:27:51.142457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71490 ] 00:13:26.928 [2024-10-16 09:27:51.273020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.928 [2024-10-16 09:27:51.314201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.187 [2024-10-16 09:27:51.366507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.187 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.187 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:27.187 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YnIbv7tMwP 00:13:27.446 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:27.705 [2024-10-16 09:27:51.915211] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.705 [2024-10-16 09:27:51.920302] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:27.705 [2024-10-16 09:27:51.920502] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:27.705 [2024-10-16 09:27:51.920620] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:27.705 [2024-10-16 09:27:51.921078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2075090 (107): Transport endpoint is not connected 00:13:27.705 [2024-10-16 09:27:51.922064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2075090 (9): Bad file descriptor 00:13:27.705 [2024-10-16 09:27:51.923061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:27.705 [2024-10-16 09:27:51.923083] nvme.c: 721:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:27.705 [2024-10-16 09:27:51.923109] nvme.c: 897:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:27.705 [2024-10-16 09:27:51.923123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:27.705 request: 00:13:27.705 { 00:13:27.705 "name": "TLSTEST", 00:13:27.705 "trtype": "tcp", 00:13:27.705 "traddr": "10.0.0.3", 00:13:27.705 "adrfam": "ipv4", 00:13:27.705 "trsvcid": "4420", 00:13:27.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.705 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:27.705 "prchk_reftag": false, 00:13:27.705 "prchk_guard": false, 00:13:27.705 "hdgst": false, 00:13:27.705 "ddgst": false, 00:13:27.705 "psk": "key0", 00:13:27.705 "allow_unrecognized_csi": false, 00:13:27.705 "method": "bdev_nvme_attach_controller", 00:13:27.705 "req_id": 1 00:13:27.705 } 00:13:27.705 Got JSON-RPC error response 00:13:27.705 response: 00:13:27.705 { 00:13:27.705 "code": -5, 00:13:27.705 "message": "Input/output error" 00:13:27.705 } 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71490 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71490 ']' 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71490 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71490 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71490' 00:13:27.705 killing process with pid 71490 00:13:27.705 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.705 00:13:27.705 Latency(us) 00:13:27.705 [2024-10-16T09:27:52.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.705 [2024-10-16T09:27:52.109Z] =================================================================================================================== 00:13:27.705 [2024-10-16T09:27:52.109Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71490 00:13:27.705 09:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71490 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YnIbv7tMwP 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YnIbv7tMwP 
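The failure above is a lookup failure rather than a TLS handshake failure: only nqn.2016-06.io.spdk:host1 was registered against cnode1 with --psk key0, so when bdevperf connects as host2 the target has no PSK for the derived identity. A small sketch of that identity string, as it appears in the tcp.c/posix.c error lines above; the "NVMe0R01" prefix and the field order are taken from the log, and any further normalization by the target is an assumption.

def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    # Format observed in the "Could not find PSK for identity" messages above.
    return "NVMe0R01 {} {}".format(hostnqn, subnqn)

# host1 was registered with a PSK for cnode1, host2 was not, hence the error:
print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))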
00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YnIbv7tMwP 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YnIbv7tMwP 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71511 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71511 /var/tmp/bdevperf.sock 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71511 ']' 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.965 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.965 [2024-10-16 09:27:52.207307] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:13:27.965 [2024-10-16 09:27:52.207409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71511 ] 00:13:27.965 [2024-10-16 09:27:52.339251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.224 [2024-10-16 09:27:52.381946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.224 [2024-10-16 09:27:52.433508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.224 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.224 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:28.224 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YnIbv7tMwP 00:13:28.483 09:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:28.742 [2024-10-16 09:27:53.033502] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:28.742 [2024-10-16 09:27:53.039856] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:28.742 [2024-10-16 09:27:53.039924] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:28.742 [2024-10-16 09:27:53.039988] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:28.742 [2024-10-16 09:27:53.040069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc28090 (107): Transport endpoint is not connected 00:13:28.742 [2024-10-16 09:27:53.041062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc28090 (9): Bad file descriptor 00:13:28.742 [2024-10-16 09:27:53.042058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:28.742 [2024-10-16 09:27:53.042098] nvme.c: 721:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:28.742 [2024-10-16 09:27:53.042125] nvme.c: 897:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:28.742 [2024-10-16 09:27:53.042141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:28.742 request: 00:13:28.742 { 00:13:28.742 "name": "TLSTEST", 00:13:28.742 "trtype": "tcp", 00:13:28.742 "traddr": "10.0.0.3", 00:13:28.742 "adrfam": "ipv4", 00:13:28.742 "trsvcid": "4420", 00:13:28.742 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:28.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:28.742 "prchk_reftag": false, 00:13:28.742 "prchk_guard": false, 00:13:28.742 "hdgst": false, 00:13:28.742 "ddgst": false, 00:13:28.742 "psk": "key0", 00:13:28.742 "allow_unrecognized_csi": false, 00:13:28.742 "method": "bdev_nvme_attach_controller", 00:13:28.742 "req_id": 1 00:13:28.742 } 00:13:28.742 Got JSON-RPC error response 00:13:28.742 response: 00:13:28.742 { 00:13:28.742 "code": -5, 00:13:28.742 "message": "Input/output error" 00:13:28.742 } 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71511 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71511 ']' 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71511 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71511 00:13:28.742 killing process with pid 71511 00:13:28.742 Received shutdown signal, test time was about 10.000000 seconds 00:13:28.742 00:13:28.742 Latency(us) 00:13:28.742 [2024-10-16T09:27:53.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.742 [2024-10-16T09:27:53.146Z] =================================================================================================================== 00:13:28.742 [2024-10-16T09:27:53.146Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71511' 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71511 00:13:28.742 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71511 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.002 09:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:29.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71532 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71532 /var/tmp/bdevperf.sock 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71532 ']' 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.002 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 [2024-10-16 09:27:53.320612] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:13:29.002 [2024-10-16 09:27:53.321095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71532 ] 00:13:29.261 [2024-10-16 09:27:53.450622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.261 [2024-10-16 09:27:53.492302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.261 [2024-10-16 09:27:53.545275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:29.261 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.261 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:29.261 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:29.519 [2024-10-16 09:27:53.872586] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:29.519 [2024-10-16 09:27:53.872637] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:29.519 request: 00:13:29.519 { 00:13:29.519 "name": "key0", 00:13:29.519 "path": "", 00:13:29.519 "method": "keyring_file_add_key", 00:13:29.519 "req_id": 1 00:13:29.519 } 00:13:29.519 Got JSON-RPC error response 00:13:29.519 response: 00:13:29.519 { 00:13:29.519 "code": -1, 00:13:29.519 "message": "Operation not permitted" 00:13:29.519 } 00:13:29.519 09:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:29.779 [2024-10-16 09:27:54.096773] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:29.779 [2024-10-16 09:27:54.096836] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:29.779 request: 00:13:29.779 { 00:13:29.779 "name": "TLSTEST", 00:13:29.779 "trtype": "tcp", 00:13:29.779 "traddr": "10.0.0.3", 00:13:29.779 "adrfam": "ipv4", 00:13:29.779 "trsvcid": "4420", 00:13:29.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:29.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:29.779 "prchk_reftag": false, 00:13:29.779 "prchk_guard": false, 00:13:29.779 "hdgst": false, 00:13:29.779 "ddgst": false, 00:13:29.779 "psk": "key0", 00:13:29.779 "allow_unrecognized_csi": false, 00:13:29.779 "method": "bdev_nvme_attach_controller", 00:13:29.779 "req_id": 1 00:13:29.779 } 00:13:29.779 Got JSON-RPC error response 00:13:29.779 response: 00:13:29.779 { 00:13:29.779 "code": -126, 00:13:29.779 "message": "Required key not available" 00:13:29.779 } 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71532 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71532 ']' 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71532 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.779 09:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71532 00:13:29.779 killing process with pid 71532 00:13:29.779 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.779 00:13:29.779 Latency(us) 00:13:29.779 [2024-10-16T09:27:54.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.779 [2024-10-16T09:27:54.183Z] =================================================================================================================== 00:13:29.779 [2024-10-16T09:27:54.183Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71532' 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71532 00:13:29.779 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71532 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71100 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71100 ']' 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71100 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71100 00:13:30.038 killing process with pid 71100 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71100' 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71100 00:13:30.038 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71100 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 
-- # prefix=NVMeTLSkey-1 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.35hJ8IbUnf 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.35hJ8IbUnf 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71569 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71569 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71569 ']' 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.297 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.297 [2024-10-16 09:27:54.664024] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:13:30.297 [2024-10-16 09:27:54.664104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.556 [2024-10-16 09:27:54.797078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.556 [2024-10-16 09:27:54.838273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.556 [2024-10-16 09:27:54.838338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:30.556 [2024-10-16 09:27:54.838364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.556 [2024-10-16 09:27:54.838371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.556 [2024-10-16 09:27:54.838378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.556 [2024-10-16 09:27:54.838765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.556 [2024-10-16 09:27:54.890731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.556 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.556 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:30.556 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:30.556 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.556 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.815 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.815 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.35hJ8IbUnf 00:13:30.815 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.35hJ8IbUnf 00:13:30.815 09:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:30.815 [2024-10-16 09:27:55.194716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.815 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:31.382 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:31.382 [2024-10-16 09:27:55.690779] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:31.382 [2024-10-16 09:27:55.691017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:31.382 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:31.641 malloc0 00:13:31.641 09:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:31.899 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.35hJ8IbUnf 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
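
Editor's note: condensed from the setup_nvmf_tgt trace above, the target-side sequence for a TLS-enabled listener boils down to the RPC chain below. This is a sketch using the same paths, NQNs, and key name as this run; waitforlisten and error handling are omitted.

    # Target-side TLS setup as traced above (sketch).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.35hJ8IbUnf          # 0600 interchange-format PSK written earlier

    "$rpc" nvmf_create_transport -t tcp -o                                   # TCP transport
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           -s SPDK00000000000001 -m 10                                       # subsystem, max 10 namespaces
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.3 -s 4420 -k                                     # -k marks the listener secure (TLS)
    "$rpc" bdev_malloc_create 32 4096 -b malloc0                             # backing bdev
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # expose it as namespace 1
    "$rpc" keyring_file_add_key key0 "$key"                                  # register the PSK in the keyring
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
           nqn.2016-06.io.spdk:host1 --psk key0                              # allow host1, bound to key0
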
00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.35hJ8IbUnf 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71610 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71610 /var/tmp/bdevperf.sock 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71610 ']' 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.159 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.417 [2024-10-16 09:27:56.581146] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:13:32.417 [2024-10-16 09:27:56.581250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71610 ] 00:13:32.417 [2024-10-16 09:27:56.716533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.417 [2024-10-16 09:27:56.769012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.677 [2024-10-16 09:27:56.825530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:32.677 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.677 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:32.677 09:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:13:32.935 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:32.935 [2024-10-16 09:27:57.313337] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:33.194 TLSTESTn1 00:13:33.194 09:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:33.194 Running I/O for 10 seconds... 00:13:35.508 4608.00 IOPS, 18.00 MiB/s [2024-10-16T09:28:00.849Z] 4610.50 IOPS, 18.01 MiB/s [2024-10-16T09:28:01.785Z] 4671.67 IOPS, 18.25 MiB/s [2024-10-16T09:28:02.721Z] 4713.00 IOPS, 18.41 MiB/s [2024-10-16T09:28:03.658Z] 4730.80 IOPS, 18.48 MiB/s [2024-10-16T09:28:04.593Z] 4741.83 IOPS, 18.52 MiB/s [2024-10-16T09:28:05.530Z] 4750.86 IOPS, 18.56 MiB/s [2024-10-16T09:28:06.909Z] 4762.75 IOPS, 18.60 MiB/s [2024-10-16T09:28:07.492Z] 4764.33 IOPS, 18.61 MiB/s [2024-10-16T09:28:07.751Z] 4766.10 IOPS, 18.62 MiB/s 00:13:43.347 Latency(us) 00:13:43.347 [2024-10-16T09:28:07.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.347 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:43.347 Verification LBA range: start 0x0 length 0x2000 00:13:43.347 TLSTESTn1 : 10.01 4771.65 18.64 0.00 0.00 26780.39 4766.25 21448.15 00:13:43.347 [2024-10-16T09:28:07.751Z] =================================================================================================================== 00:13:43.347 [2024-10-16T09:28:07.751Z] Total : 4771.65 18.64 0.00 0.00 26780.39 4766.25 21448.15 00:13:43.347 { 00:13:43.347 "results": [ 00:13:43.347 { 00:13:43.347 "job": "TLSTESTn1", 00:13:43.347 "core_mask": "0x4", 00:13:43.347 "workload": "verify", 00:13:43.347 "status": "finished", 00:13:43.347 "verify_range": { 00:13:43.347 "start": 0, 00:13:43.347 "length": 8192 00:13:43.347 }, 00:13:43.347 "queue_depth": 128, 00:13:43.347 "io_size": 4096, 00:13:43.347 "runtime": 10.014782, 00:13:43.347 "iops": 4771.646552066735, 00:13:43.347 "mibps": 18.639244344010685, 00:13:43.347 "io_failed": 0, 00:13:43.347 "io_timeout": 0, 00:13:43.347 "avg_latency_us": 26780.3908571559, 00:13:43.347 "min_latency_us": 4766.254545454545, 00:13:43.347 
"max_latency_us": 21448.145454545454 00:13:43.347 } 00:13:43.347 ], 00:13:43.347 "core_count": 1 00:13:43.347 } 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71610 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71610 ']' 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71610 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71610 00:13:43.347 killing process with pid 71610 00:13:43.347 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.347 00:13:43.347 Latency(us) 00:13:43.347 [2024-10-16T09:28:07.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.347 [2024-10-16T09:28:07.751Z] =================================================================================================================== 00:13:43.347 [2024-10-16T09:28:07.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71610' 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71610 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71610 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.35hJ8IbUnf 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.35hJ8IbUnf 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.35hJ8IbUnf 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.347 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.35hJ8IbUnf 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.35hJ8IbUnf 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71734 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71734 /var/tmp/bdevperf.sock 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71734 ']' 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:43.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.606 09:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.606 [2024-10-16 09:28:07.812048] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:13:43.606 [2024-10-16 09:28:07.812161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71734 ] 00:13:43.606 [2024-10-16 09:28:07.948755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.606 [2024-10-16 09:28:07.991060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.865 [2024-10-16 09:28:08.042953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:43.865 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.865 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:43.865 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:13:44.123 [2024-10-16 09:28:08.351181] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.35hJ8IbUnf': 0100666 00:13:44.123 [2024-10-16 09:28:08.351230] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:44.123 request: 00:13:44.123 { 00:13:44.123 "name": "key0", 00:13:44.123 "path": "/tmp/tmp.35hJ8IbUnf", 00:13:44.123 "method": "keyring_file_add_key", 00:13:44.123 "req_id": 1 00:13:44.123 } 00:13:44.123 Got JSON-RPC error response 00:13:44.123 response: 00:13:44.123 { 00:13:44.123 "code": -1, 00:13:44.123 "message": "Operation not permitted" 00:13:44.123 } 00:13:44.123 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:44.382 [2024-10-16 09:28:08.571336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:44.382 [2024-10-16 09:28:08.571415] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:44.382 request: 00:13:44.382 { 00:13:44.382 "name": "TLSTEST", 00:13:44.382 "trtype": "tcp", 00:13:44.382 "traddr": "10.0.0.3", 00:13:44.382 "adrfam": "ipv4", 00:13:44.382 "trsvcid": "4420", 00:13:44.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:44.382 "prchk_reftag": false, 00:13:44.382 "prchk_guard": false, 00:13:44.382 "hdgst": false, 00:13:44.382 "ddgst": false, 00:13:44.382 "psk": "key0", 00:13:44.382 "allow_unrecognized_csi": false, 00:13:44.382 "method": "bdev_nvme_attach_controller", 00:13:44.382 "req_id": 1 00:13:44.382 } 00:13:44.382 Got JSON-RPC error response 00:13:44.382 response: 00:13:44.382 { 00:13:44.382 "code": -126, 00:13:44.382 "message": "Required key not available" 00:13:44.382 } 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71734 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71734 ']' 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71734 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71734 00:13:44.382 killing process with pid 71734 00:13:44.382 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.382 00:13:44.382 Latency(us) 00:13:44.382 [2024-10-16T09:28:08.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.382 [2024-10-16T09:28:08.786Z] =================================================================================================================== 00:13:44.382 [2024-10-16T09:28:08.786Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71734' 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71734 00:13:44.382 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71734 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71569 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71569 ']' 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71569 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71569 00:13:44.641 killing process with pid 71569 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71569' 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71569 00:13:44.641 09:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71569 00:13:44.641 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:44.641 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:44.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71770 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71770 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71770 ']' 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.642 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.900 [2024-10-16 09:28:09.074683] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:13:44.900 [2024-10-16 09:28:09.074782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.900 [2024-10-16 09:28:09.205431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.901 [2024-10-16 09:28:09.244332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.901 [2024-10-16 09:28:09.244402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.901 [2024-10-16 09:28:09.244428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.901 [2024-10-16 09:28:09.244436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.901 [2024-10-16 09:28:09.244442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
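
Editor's note: the bdevperf failure above and the setup_nvmf_tgt failure below both come from one check. Once the test relaxes the key file to 0666, keyring.c rejects it ("Invalid permissions for key file ... 0100666"), keyring_file_add_key returns -1 "Operation not permitted", and every RPC that references key0 then fails (Could not load PSK on the initiator side, Key 'key0' does not exist on the target side). A minimal reproduction of the negative case, using the same RPC calls as the trace:

    # Negative case exercised by this run: a PSK file readable by group/other is rejected.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.35hJ8IbUnf

    chmod 0666 "$key"
    if ! "$rpc" keyring_file_add_key key0 "$key"; then
        echo "rejected: keyring_file expects the key file to be private (0600)"
    fi

    chmod 0600 "$key"                         # restored later in the trace before the positive test
    "$rpc" keyring_file_add_key key0 "$key"   # succeeds once the mode is private again
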
00:13:44.901 [2024-10-16 09:28:09.244856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.901 [2024-10-16 09:28:09.295605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.35hJ8IbUnf 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.35hJ8IbUnf 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.35hJ8IbUnf 00:13:45.159 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.35hJ8IbUnf 00:13:45.160 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:45.418 [2024-10-16 09:28:09.600830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.418 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:45.677 09:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:45.936 [2024-10-16 09:28:10.156972] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:45.936 [2024-10-16 09:28:10.157255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:45.936 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:46.195 malloc0 00:13:46.195 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:46.453 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:13:46.453 
[2024-10-16 09:28:10.855107] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.35hJ8IbUnf': 0100666 00:13:46.453 [2024-10-16 09:28:10.855159] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:46.710 request: 00:13:46.710 { 00:13:46.710 "name": "key0", 00:13:46.710 "path": "/tmp/tmp.35hJ8IbUnf", 00:13:46.710 "method": "keyring_file_add_key", 00:13:46.710 "req_id": 1 00:13:46.710 } 00:13:46.710 Got JSON-RPC error response 00:13:46.710 response: 00:13:46.710 { 00:13:46.710 "code": -1, 00:13:46.710 "message": "Operation not permitted" 00:13:46.710 } 00:13:46.710 09:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:46.710 [2024-10-16 09:28:11.115190] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:46.710 [2024-10-16 09:28:11.115289] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:46.969 request: 00:13:46.969 { 00:13:46.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.969 "host": "nqn.2016-06.io.spdk:host1", 00:13:46.969 "psk": "key0", 00:13:46.969 "method": "nvmf_subsystem_add_host", 00:13:46.969 "req_id": 1 00:13:46.969 } 00:13:46.969 Got JSON-RPC error response 00:13:46.969 response: 00:13:46.969 { 00:13:46.969 "code": -32603, 00:13:46.969 "message": "Internal error" 00:13:46.969 } 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71770 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71770 ']' 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71770 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71770 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:46.969 killing process with pid 71770 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71770' 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71770 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71770 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.35hJ8IbUnf 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71827 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71827 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71827 ']' 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.969 09:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.227 [2024-10-16 09:28:11.425348] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:13:47.227 [2024-10-16 09:28:11.425451] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.227 [2024-10-16 09:28:11.555653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.227 [2024-10-16 09:28:11.603428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.227 [2024-10-16 09:28:11.603481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.227 [2024-10-16 09:28:11.603507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.227 [2024-10-16 09:28:11.603514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.227 [2024-10-16 09:28:11.603520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
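
Editor's note: the tgtconf JSON captured a little further down (rpc.py save_config) is where the earlier RPCs become visible as persistent configuration: the keyring section carries key0 with its file path, and the nvmf section shows nvmf_subsystem_add_host with "psk": "key0" and the listener with "secure_channel": true. A small sketch for pulling just those TLS-relevant entries back out of a saved config (standard library only):

    # Extract the TLS-relevant entries from an SPDK save_config dump (sketch).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config | python3 -c '
    import json, sys

    cfg = json.load(sys.stdin)
    wanted = {"keyring_file_add_key", "nvmf_subsystem_add_host", "nvmf_subsystem_add_listener"}
    for subsystem in cfg["subsystems"]:
        for entry in subsystem.get("config") or []:
            if entry.get("method") in wanted:
                print(entry["method"], json.dumps(entry["params"]))
    '
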
00:13:47.227 [2024-10-16 09:28:11.603968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.485 [2024-10-16 09:28:11.655943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.35hJ8IbUnf 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.35hJ8IbUnf 00:13:48.050 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.307 [2024-10-16 09:28:12.611969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.307 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:48.564 09:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:48.823 [2024-10-16 09:28:13.120082] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:48.823 [2024-10-16 09:28:13.120306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:48.823 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:49.105 malloc0 00:13:49.106 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.364 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:13:49.622 09:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71883 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71883 /var/tmp/bdevperf.sock 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71883 ']' 
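
Editor's note: on the initiator side, the pattern repeated for each bdevperf run in this log is to start bdevperf with its own RPC socket, register the same PSK file under a key name on that socket, attach the controller with --psk, and let the perform_tests helper drive the verify workload. A condensed sketch with the exact arguments from the trace (waiting for the RPC socket to come up is omitted):

    # Host-side sequence for the TLS bdevperf runs (sketch; arguments as traced).
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    key=/tmp/tmp.35hJ8IbUnf

    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

    "$spdk/scripts/rpc.py" -s "$sock" keyring_file_add_key key0 "$key"
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$sock" perform_tests
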
00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.881 09:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.881 [2024-10-16 09:28:14.181537] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:13:49.881 [2024-10-16 09:28:14.181666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71883 ] 00:13:50.140 [2024-10-16 09:28:14.315343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.140 [2024-10-16 09:28:14.370800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.140 [2024-10-16 09:28:14.426889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.076 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.076 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:51.076 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:13:51.077 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:51.335 [2024-10-16 09:28:15.526811] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.335 TLSTESTn1 00:13:51.335 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:51.595 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:51.595 "subsystems": [ 00:13:51.595 { 00:13:51.595 "subsystem": "keyring", 00:13:51.595 "config": [ 00:13:51.595 { 00:13:51.595 "method": "keyring_file_add_key", 00:13:51.595 "params": { 00:13:51.595 "name": "key0", 00:13:51.595 "path": "/tmp/tmp.35hJ8IbUnf" 00:13:51.595 } 00:13:51.595 } 00:13:51.595 ] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "iobuf", 00:13:51.595 "config": [ 00:13:51.595 { 00:13:51.595 "method": "iobuf_set_options", 00:13:51.595 "params": { 00:13:51.595 "small_pool_count": 8192, 00:13:51.595 "large_pool_count": 1024, 00:13:51.595 "small_bufsize": 8192, 00:13:51.595 "large_bufsize": 135168 00:13:51.595 } 00:13:51.595 } 00:13:51.595 ] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "sock", 00:13:51.595 "config": [ 00:13:51.595 { 00:13:51.595 "method": "sock_set_default_impl", 00:13:51.595 "params": { 00:13:51.595 "impl_name": "uring" 00:13:51.595 
} 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "sock_impl_set_options", 00:13:51.595 "params": { 00:13:51.595 "impl_name": "ssl", 00:13:51.595 "recv_buf_size": 4096, 00:13:51.595 "send_buf_size": 4096, 00:13:51.595 "enable_recv_pipe": true, 00:13:51.595 "enable_quickack": false, 00:13:51.595 "enable_placement_id": 0, 00:13:51.595 "enable_zerocopy_send_server": true, 00:13:51.595 "enable_zerocopy_send_client": false, 00:13:51.595 "zerocopy_threshold": 0, 00:13:51.595 "tls_version": 0, 00:13:51.595 "enable_ktls": false 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "sock_impl_set_options", 00:13:51.595 "params": { 00:13:51.595 "impl_name": "posix", 00:13:51.595 "recv_buf_size": 2097152, 00:13:51.595 "send_buf_size": 2097152, 00:13:51.595 "enable_recv_pipe": true, 00:13:51.595 "enable_quickack": false, 00:13:51.595 "enable_placement_id": 0, 00:13:51.595 "enable_zerocopy_send_server": true, 00:13:51.595 "enable_zerocopy_send_client": false, 00:13:51.595 "zerocopy_threshold": 0, 00:13:51.595 "tls_version": 0, 00:13:51.595 "enable_ktls": false 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "sock_impl_set_options", 00:13:51.595 "params": { 00:13:51.595 "impl_name": "uring", 00:13:51.595 "recv_buf_size": 2097152, 00:13:51.595 "send_buf_size": 2097152, 00:13:51.595 "enable_recv_pipe": true, 00:13:51.595 "enable_quickack": false, 00:13:51.595 "enable_placement_id": 0, 00:13:51.595 "enable_zerocopy_send_server": false, 00:13:51.595 "enable_zerocopy_send_client": false, 00:13:51.595 "zerocopy_threshold": 0, 00:13:51.595 "tls_version": 0, 00:13:51.595 "enable_ktls": false 00:13:51.595 } 00:13:51.595 } 00:13:51.595 ] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "vmd", 00:13:51.595 "config": [] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "accel", 00:13:51.595 "config": [ 00:13:51.595 { 00:13:51.595 "method": "accel_set_options", 00:13:51.595 "params": { 00:13:51.595 "small_cache_size": 128, 00:13:51.595 "large_cache_size": 16, 00:13:51.595 "task_count": 2048, 00:13:51.595 "sequence_count": 2048, 00:13:51.595 "buf_count": 2048 00:13:51.595 } 00:13:51.595 } 00:13:51.595 ] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "bdev", 00:13:51.595 "config": [ 00:13:51.595 { 00:13:51.595 "method": "bdev_set_options", 00:13:51.595 "params": { 00:13:51.595 "bdev_io_pool_size": 65535, 00:13:51.595 "bdev_io_cache_size": 256, 00:13:51.595 "bdev_auto_examine": true, 00:13:51.595 "iobuf_small_cache_size": 128, 00:13:51.595 "iobuf_large_cache_size": 16 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "bdev_raid_set_options", 00:13:51.595 "params": { 00:13:51.595 "process_window_size_kb": 1024, 00:13:51.595 "process_max_bandwidth_mb_sec": 0 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "bdev_iscsi_set_options", 00:13:51.595 "params": { 00:13:51.595 "timeout_sec": 30 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "bdev_nvme_set_options", 00:13:51.595 "params": { 00:13:51.595 "action_on_timeout": "none", 00:13:51.595 "timeout_us": 0, 00:13:51.595 "timeout_admin_us": 0, 00:13:51.595 "keep_alive_timeout_ms": 10000, 00:13:51.595 "arbitration_burst": 0, 00:13:51.595 "low_priority_weight": 0, 00:13:51.595 "medium_priority_weight": 0, 00:13:51.595 "high_priority_weight": 0, 00:13:51.595 "nvme_adminq_poll_period_us": 10000, 00:13:51.595 "nvme_ioq_poll_period_us": 0, 00:13:51.595 "io_queue_requests": 0, 00:13:51.595 "delay_cmd_submit": true, 00:13:51.595 "transport_retry_count": 4, 
00:13:51.595 "bdev_retry_count": 3, 00:13:51.595 "transport_ack_timeout": 0, 00:13:51.595 "ctrlr_loss_timeout_sec": 0, 00:13:51.595 "reconnect_delay_sec": 0, 00:13:51.595 "fast_io_fail_timeout_sec": 0, 00:13:51.595 "disable_auto_failback": false, 00:13:51.595 "generate_uuids": false, 00:13:51.595 "transport_tos": 0, 00:13:51.595 "nvme_error_stat": false, 00:13:51.595 "rdma_srq_size": 0, 00:13:51.595 "io_path_stat": false, 00:13:51.595 "allow_accel_sequence": false, 00:13:51.595 "rdma_max_cq_size": 0, 00:13:51.595 "rdma_cm_event_timeout_ms": 0, 00:13:51.595 "dhchap_digests": [ 00:13:51.595 "sha256", 00:13:51.595 "sha384", 00:13:51.595 "sha512" 00:13:51.595 ], 00:13:51.595 "dhchap_dhgroups": [ 00:13:51.595 "null", 00:13:51.595 "ffdhe2048", 00:13:51.595 "ffdhe3072", 00:13:51.595 "ffdhe4096", 00:13:51.595 "ffdhe6144", 00:13:51.595 "ffdhe8192" 00:13:51.595 ] 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "bdev_nvme_set_hotplug", 00:13:51.595 "params": { 00:13:51.595 "period_us": 100000, 00:13:51.595 "enable": false 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "bdev_malloc_create", 00:13:51.595 "params": { 00:13:51.595 "name": "malloc0", 00:13:51.595 "num_blocks": 8192, 00:13:51.595 "block_size": 4096, 00:13:51.595 "physical_block_size": 4096, 00:13:51.595 "uuid": "38b57d56-3d5e-4b80-84c1-ed4719646145", 00:13:51.595 "optimal_io_boundary": 0, 00:13:51.595 "md_size": 0, 00:13:51.595 "dif_type": 0, 00:13:51.595 "dif_is_head_of_md": false, 00:13:51.595 "dif_pi_format": 0 00:13:51.595 } 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "method": "bdev_wait_for_examine" 00:13:51.595 } 00:13:51.595 ] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "nbd", 00:13:51.595 "config": [] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "scheduler", 00:13:51.595 "config": [ 00:13:51.595 { 00:13:51.595 "method": "framework_set_scheduler", 00:13:51.595 "params": { 00:13:51.595 "name": "static" 00:13:51.595 } 00:13:51.595 } 00:13:51.595 ] 00:13:51.595 }, 00:13:51.595 { 00:13:51.595 "subsystem": "nvmf", 00:13:51.595 "config": [ 00:13:51.595 { 00:13:51.595 "method": "nvmf_set_config", 00:13:51.595 "params": { 00:13:51.595 "discovery_filter": "match_any", 00:13:51.595 "admin_cmd_passthru": { 00:13:51.595 "identify_ctrlr": false 00:13:51.595 }, 00:13:51.595 "dhchap_digests": [ 00:13:51.595 "sha256", 00:13:51.595 "sha384", 00:13:51.596 "sha512" 00:13:51.596 ], 00:13:51.596 "dhchap_dhgroups": [ 00:13:51.596 "null", 00:13:51.596 "ffdhe2048", 00:13:51.596 "ffdhe3072", 00:13:51.596 "ffdhe4096", 00:13:51.596 "ffdhe6144", 00:13:51.596 "ffdhe8192" 00:13:51.596 ] 00:13:51.596 } 00:13:51.596 }, 00:13:51.596 { 00:13:51.596 "method": "nvmf_set_max_subsystems", 00:13:51.596 "params": { 00:13:51.596 "max_subsystems": 1024 00:13:51.596 } 00:13:51.596 }, 00:13:51.596 { 00:13:51.596 "method": "nvmf_set_crdt", 00:13:51.596 "params": { 00:13:51.596 "crdt1": 0, 00:13:51.596 "crdt2": 0, 00:13:51.596 "crdt3": 0 00:13:51.596 } 00:13:51.596 }, 00:13:51.596 { 00:13:51.596 "method": "nvmf_create_transport", 00:13:51.596 "params": { 00:13:51.596 "trtype": "TCP", 00:13:51.596 "max_queue_depth": 128, 00:13:51.596 "max_io_qpairs_per_ctrlr": 127, 00:13:51.596 "in_capsule_data_size": 4096, 00:13:51.596 "max_io_size": 131072, 00:13:51.596 "io_unit_size": 131072, 00:13:51.596 "max_aq_depth": 128, 00:13:51.596 "num_shared_buffers": 511, 00:13:51.596 "buf_cache_size": 4294967295, 00:13:51.596 "dif_insert_or_strip": false, 00:13:51.596 "zcopy": false, 00:13:51.596 "c2h_success": false, 00:13:51.596 
"sock_priority": 0, 00:13:51.596 "abort_timeout_sec": 1, 00:13:51.596 "ack_timeout": 0, 00:13:51.596 "data_wr_pool_size": 0 00:13:51.596 } 00:13:51.596 }, 00:13:51.596 { 00:13:51.596 "method": "nvmf_create_subsystem", 00:13:51.596 "params": { 00:13:51.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.596 "allow_any_host": false, 00:13:51.596 "serial_number": "SPDK00000000000001", 00:13:51.596 "model_number": "SPDK bdev Controller", 00:13:51.596 "max_namespaces": 10, 00:13:51.596 "min_cntlid": 1, 00:13:51.596 "max_cntlid": 65519, 00:13:51.596 "ana_reporting": false 00:13:51.596 } 00:13:51.596 }, 00:13:51.596 { 00:13:51.596 "method": "nvmf_subsystem_add_host", 00:13:51.596 "params": { 00:13:51.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.596 "host": "nqn.2016-06.io.spdk:host1", 00:13:51.596 "psk": "key0" 00:13:51.596 } 00:13:51.596 }, 00:13:51.596 { 00:13:51.596 "method": "nvmf_subsystem_add_ns", 00:13:51.596 "params": { 00:13:51.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.596 "namespace": { 00:13:51.596 "nsid": 1, 00:13:51.596 "bdev_name": "malloc0", 00:13:51.596 "nguid": "38B57D563D5E4B8084C1ED4719646145", 00:13:51.596 "uuid": "38b57d56-3d5e-4b80-84c1-ed4719646145", 00:13:51.596 "no_auto_visible": false 00:13:51.596 } 00:13:51.596 } 00:13:51.596 }, 00:13:51.596 { 00:13:51.596 "method": "nvmf_subsystem_add_listener", 00:13:51.596 "params": { 00:13:51.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.596 "listen_address": { 00:13:51.596 "trtype": "TCP", 00:13:51.596 "adrfam": "IPv4", 00:13:51.596 "traddr": "10.0.0.3", 00:13:51.596 "trsvcid": "4420" 00:13:51.596 }, 00:13:51.596 "secure_channel": true 00:13:51.596 } 00:13:51.596 } 00:13:51.596 ] 00:13:51.596 } 00:13:51.596 ] 00:13:51.596 }' 00:13:51.596 09:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:52.164 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:52.164 "subsystems": [ 00:13:52.164 { 00:13:52.164 "subsystem": "keyring", 00:13:52.164 "config": [ 00:13:52.164 { 00:13:52.164 "method": "keyring_file_add_key", 00:13:52.164 "params": { 00:13:52.164 "name": "key0", 00:13:52.164 "path": "/tmp/tmp.35hJ8IbUnf" 00:13:52.164 } 00:13:52.164 } 00:13:52.164 ] 00:13:52.164 }, 00:13:52.164 { 00:13:52.164 "subsystem": "iobuf", 00:13:52.164 "config": [ 00:13:52.164 { 00:13:52.164 "method": "iobuf_set_options", 00:13:52.164 "params": { 00:13:52.164 "small_pool_count": 8192, 00:13:52.164 "large_pool_count": 1024, 00:13:52.164 "small_bufsize": 8192, 00:13:52.164 "large_bufsize": 135168 00:13:52.164 } 00:13:52.164 } 00:13:52.164 ] 00:13:52.164 }, 00:13:52.164 { 00:13:52.164 "subsystem": "sock", 00:13:52.164 "config": [ 00:13:52.164 { 00:13:52.164 "method": "sock_set_default_impl", 00:13:52.164 "params": { 00:13:52.164 "impl_name": "uring" 00:13:52.164 } 00:13:52.164 }, 00:13:52.164 { 00:13:52.164 "method": "sock_impl_set_options", 00:13:52.164 "params": { 00:13:52.164 "impl_name": "ssl", 00:13:52.164 "recv_buf_size": 4096, 00:13:52.164 "send_buf_size": 4096, 00:13:52.164 "enable_recv_pipe": true, 00:13:52.164 "enable_quickack": false, 00:13:52.164 "enable_placement_id": 0, 00:13:52.164 "enable_zerocopy_send_server": true, 00:13:52.164 "enable_zerocopy_send_client": false, 00:13:52.164 "zerocopy_threshold": 0, 00:13:52.164 "tls_version": 0, 00:13:52.164 "enable_ktls": false 00:13:52.164 } 00:13:52.164 }, 00:13:52.164 { 00:13:52.164 "method": "sock_impl_set_options", 00:13:52.164 "params": { 
00:13:52.164 "impl_name": "posix", 00:13:52.164 "recv_buf_size": 2097152, 00:13:52.164 "send_buf_size": 2097152, 00:13:52.164 "enable_recv_pipe": true, 00:13:52.164 "enable_quickack": false, 00:13:52.164 "enable_placement_id": 0, 00:13:52.164 "enable_zerocopy_send_server": true, 00:13:52.164 "enable_zerocopy_send_client": false, 00:13:52.164 "zerocopy_threshold": 0, 00:13:52.164 "tls_version": 0, 00:13:52.164 "enable_ktls": false 00:13:52.164 } 00:13:52.164 }, 00:13:52.164 { 00:13:52.164 "method": "sock_impl_set_options", 00:13:52.164 "params": { 00:13:52.164 "impl_name": "uring", 00:13:52.164 "recv_buf_size": 2097152, 00:13:52.164 "send_buf_size": 2097152, 00:13:52.164 "enable_recv_pipe": true, 00:13:52.164 "enable_quickack": false, 00:13:52.164 "enable_placement_id": 0, 00:13:52.164 "enable_zerocopy_send_server": false, 00:13:52.164 "enable_zerocopy_send_client": false, 00:13:52.164 "zerocopy_threshold": 0, 00:13:52.164 "tls_version": 0, 00:13:52.164 "enable_ktls": false 00:13:52.164 } 00:13:52.164 } 00:13:52.164 ] 00:13:52.164 }, 00:13:52.164 { 00:13:52.164 "subsystem": "vmd", 00:13:52.164 "config": [] 00:13:52.164 }, 00:13:52.164 { 00:13:52.164 "subsystem": "accel", 00:13:52.164 "config": [ 00:13:52.164 { 00:13:52.164 "method": "accel_set_options", 00:13:52.164 "params": { 00:13:52.164 "small_cache_size": 128, 00:13:52.165 "large_cache_size": 16, 00:13:52.165 "task_count": 2048, 00:13:52.165 "sequence_count": 2048, 00:13:52.165 "buf_count": 2048 00:13:52.165 } 00:13:52.165 } 00:13:52.165 ] 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "subsystem": "bdev", 00:13:52.165 "config": [ 00:13:52.165 { 00:13:52.165 "method": "bdev_set_options", 00:13:52.165 "params": { 00:13:52.165 "bdev_io_pool_size": 65535, 00:13:52.165 "bdev_io_cache_size": 256, 00:13:52.165 "bdev_auto_examine": true, 00:13:52.165 "iobuf_small_cache_size": 128, 00:13:52.165 "iobuf_large_cache_size": 16 00:13:52.165 } 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "method": "bdev_raid_set_options", 00:13:52.165 "params": { 00:13:52.165 "process_window_size_kb": 1024, 00:13:52.165 "process_max_bandwidth_mb_sec": 0 00:13:52.165 } 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "method": "bdev_iscsi_set_options", 00:13:52.165 "params": { 00:13:52.165 "timeout_sec": 30 00:13:52.165 } 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "method": "bdev_nvme_set_options", 00:13:52.165 "params": { 00:13:52.165 "action_on_timeout": "none", 00:13:52.165 "timeout_us": 0, 00:13:52.165 "timeout_admin_us": 0, 00:13:52.165 "keep_alive_timeout_ms": 10000, 00:13:52.165 "arbitration_burst": 0, 00:13:52.165 "low_priority_weight": 0, 00:13:52.165 "medium_priority_weight": 0, 00:13:52.165 "high_priority_weight": 0, 00:13:52.165 "nvme_adminq_poll_period_us": 10000, 00:13:52.165 "nvme_ioq_poll_period_us": 0, 00:13:52.165 "io_queue_requests": 512, 00:13:52.165 "delay_cmd_submit": true, 00:13:52.165 "transport_retry_count": 4, 00:13:52.165 "bdev_retry_count": 3, 00:13:52.165 "transport_ack_timeout": 0, 00:13:52.165 "ctrlr_loss_timeout_sec": 0, 00:13:52.165 "reconnect_delay_sec": 0, 00:13:52.165 "fast_io_fail_timeout_sec": 0, 00:13:52.165 "disable_auto_failback": false, 00:13:52.165 "generate_uuids": false, 00:13:52.165 "transport_tos": 0, 00:13:52.165 "nvme_error_stat": false, 00:13:52.165 "rdma_srq_size": 0, 00:13:52.165 "io_path_stat": false, 00:13:52.165 "allow_accel_sequence": false, 00:13:52.165 "rdma_max_cq_size": 0, 00:13:52.165 "rdma_cm_event_timeout_ms": 0, 00:13:52.165 "dhchap_digests": [ 00:13:52.165 "sha256", 00:13:52.165 "sha384", 00:13:52.165 "sha512" 
00:13:52.165 ], 00:13:52.165 "dhchap_dhgroups": [ 00:13:52.165 "null", 00:13:52.165 "ffdhe2048", 00:13:52.165 "ffdhe3072", 00:13:52.165 "ffdhe4096", 00:13:52.165 "ffdhe6144", 00:13:52.165 "ffdhe8192" 00:13:52.165 ] 00:13:52.165 } 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "method": "bdev_nvme_attach_controller", 00:13:52.165 "params": { 00:13:52.165 "name": "TLSTEST", 00:13:52.165 "trtype": "TCP", 00:13:52.165 "adrfam": "IPv4", 00:13:52.165 "traddr": "10.0.0.3", 00:13:52.165 "trsvcid": "4420", 00:13:52.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.165 "prchk_reftag": false, 00:13:52.165 "prchk_guard": false, 00:13:52.165 "ctrlr_loss_timeout_sec": 0, 00:13:52.165 "reconnect_delay_sec": 0, 00:13:52.165 "fast_io_fail_timeout_sec": 0, 00:13:52.165 "psk": "key0", 00:13:52.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.165 "hdgst": false, 00:13:52.165 "ddgst": false, 00:13:52.165 "multipath": "multipath" 00:13:52.165 } 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "method": "bdev_nvme_set_hotplug", 00:13:52.165 "params": { 00:13:52.165 "period_us": 100000, 00:13:52.165 "enable": false 00:13:52.165 } 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "method": "bdev_wait_for_examine" 00:13:52.165 } 00:13:52.165 ] 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "subsystem": "nbd", 00:13:52.165 "config": [] 00:13:52.165 } 00:13:52.165 ] 00:13:52.165 }' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71883 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71883 ']' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71883 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71883 00:13:52.165 killing process with pid 71883 00:13:52.165 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.165 00:13:52.165 Latency(us) 00:13:52.165 [2024-10-16T09:28:16.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.165 [2024-10-16T09:28:16.569Z] =================================================================================================================== 00:13:52.165 [2024-10-16T09:28:16.569Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71883' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71883 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71883 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71827 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71827 ']' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71827 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71827 00:13:52.165 killing process with pid 71827 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71827' 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71827 00:13:52.165 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71827 00:13:52.424 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:52.424 "subsystems": [ 00:13:52.424 { 00:13:52.424 "subsystem": "keyring", 00:13:52.424 "config": [ 00:13:52.424 { 00:13:52.424 "method": "keyring_file_add_key", 00:13:52.424 "params": { 00:13:52.424 "name": "key0", 00:13:52.424 "path": "/tmp/tmp.35hJ8IbUnf" 00:13:52.424 } 00:13:52.424 } 00:13:52.424 ] 00:13:52.424 }, 00:13:52.424 { 00:13:52.424 "subsystem": "iobuf", 00:13:52.424 "config": [ 00:13:52.424 { 00:13:52.424 "method": "iobuf_set_options", 00:13:52.424 "params": { 00:13:52.424 "small_pool_count": 8192, 00:13:52.424 "large_pool_count": 1024, 00:13:52.424 "small_bufsize": 8192, 00:13:52.424 "large_bufsize": 135168 00:13:52.424 } 00:13:52.424 } 00:13:52.424 ] 00:13:52.424 }, 00:13:52.424 { 00:13:52.424 "subsystem": "sock", 00:13:52.424 "config": [ 00:13:52.424 { 00:13:52.424 "method": "sock_set_default_impl", 00:13:52.424 "params": { 00:13:52.424 "impl_name": "uring" 00:13:52.424 } 00:13:52.424 }, 00:13:52.424 { 00:13:52.424 "method": "sock_impl_set_options", 00:13:52.424 "params": { 00:13:52.424 "impl_name": "ssl", 00:13:52.424 "recv_buf_size": 4096, 00:13:52.424 "send_buf_size": 4096, 00:13:52.424 "enable_recv_pipe": true, 00:13:52.424 "enable_quickack": false, 00:13:52.424 "enable_placement_id": 0, 00:13:52.424 "enable_zerocopy_send_server": true, 00:13:52.425 "enable_zerocopy_send_client": false, 00:13:52.425 "zerocopy_threshold": 0, 00:13:52.425 "tls_version": 0, 00:13:52.425 "enable_ktls": false 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "sock_impl_set_options", 00:13:52.425 "params": { 00:13:52.425 "impl_name": "posix", 00:13:52.425 "recv_buf_size": 2097152, 00:13:52.425 "send_buf_size": 2097152, 00:13:52.425 "enable_recv_pipe": true, 00:13:52.425 "enable_quickack": false, 00:13:52.425 "enable_placement_id": 0, 00:13:52.425 "enable_zerocopy_send_server": true, 00:13:52.425 "enable_zerocopy_send_client": false, 00:13:52.425 "zerocopy_threshold": 0, 00:13:52.425 "tls_version": 0, 00:13:52.425 "enable_ktls": false 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "sock_impl_set_options", 00:13:52.425 "params": { 00:13:52.425 "impl_name": "uring", 00:13:52.425 "recv_buf_size": 2097152, 00:13:52.425 "send_buf_size": 2097152, 00:13:52.425 "enable_recv_pipe": true, 00:13:52.425 "enable_quickack": false, 00:13:52.425 "enable_placement_id": 0, 00:13:52.425 "enable_zerocopy_send_server": false, 00:13:52.425 "enable_zerocopy_send_client": false, 00:13:52.425 "zerocopy_threshold": 0, 00:13:52.425 "tls_version": 0, 00:13:52.425 "enable_ktls": false 
00:13:52.425 } 00:13:52.425 } 00:13:52.425 ] 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "subsystem": "vmd", 00:13:52.425 "config": [] 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "subsystem": "accel", 00:13:52.425 "config": [ 00:13:52.425 { 00:13:52.425 "method": "accel_set_options", 00:13:52.425 "params": { 00:13:52.425 "small_cache_size": 128, 00:13:52.425 "large_cache_size": 16, 00:13:52.425 "task_count": 2048, 00:13:52.425 "sequence_count": 2048, 00:13:52.425 "buf_count": 2048 00:13:52.425 } 00:13:52.425 } 00:13:52.425 ] 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "subsystem": "bdev", 00:13:52.425 "config": [ 00:13:52.425 { 00:13:52.425 "method": "bdev_set_options", 00:13:52.425 "params": { 00:13:52.425 "bdev_io_pool_size": 65535, 00:13:52.425 "bdev_io_cache_size": 256, 00:13:52.425 "bdev_auto_examine": true, 00:13:52.425 "iobuf_small_cache_size": 128, 00:13:52.425 "iobuf_large_cache_size": 16 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "bdev_raid_set_options", 00:13:52.425 "params": { 00:13:52.425 "process_window_size_kb": 1024, 00:13:52.425 "process_max_bandwidth_mb_sec": 0 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "bdev_iscsi_set_options", 00:13:52.425 "params": { 00:13:52.425 "timeout_sec": 30 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "bdev_nvme_set_options", 00:13:52.425 "params": { 00:13:52.425 "action_on_timeout": "none", 00:13:52.425 "timeout_us": 0, 00:13:52.425 "timeout_admin_us": 0, 00:13:52.425 "keep_alive_timeout_ms": 10000, 00:13:52.425 "arbitration_burst": 0, 00:13:52.425 "low_priority_weight": 0, 00:13:52.425 "medium_priority_weight": 0, 00:13:52.425 "high_priority_weight": 0, 00:13:52.425 "nvme_adminq_poll_period_us": 10000, 00:13:52.425 "nvme_ioq_poll_period_us": 0, 00:13:52.425 "io_queue_requests": 0, 00:13:52.425 "delay_cmd_submit": true, 00:13:52.425 "transport_retry_count": 4, 00:13:52.425 "bdev_retry_count": 3, 00:13:52.425 "transport_ack_timeout": 0, 00:13:52.425 "ctrlr_loss_timeout_sec": 0, 00:13:52.425 "reconnect_delay_sec": 0, 00:13:52.425 "fast_io_fail_timeout_sec": 0, 00:13:52.425 "disable_auto_failback": false, 00:13:52.425 "generate_uuids": false, 00:13:52.425 "transport_tos": 0, 00:13:52.425 "nvme_error_stat": false, 00:13:52.425 "rdma_srq_size": 0, 00:13:52.425 "io_path_stat": false, 00:13:52.425 "allow_accel_sequence": false, 00:13:52.425 "rdma_max_cq_size": 0, 00:13:52.425 "rdma_cm_event_timeout_ms": 0, 00:13:52.425 "dhchap_digests": [ 00:13:52.425 "sha256", 00:13:52.425 "sha384", 00:13:52.425 "sha512" 00:13:52.425 ], 00:13:52.425 "dhchap_dhgroups": [ 00:13:52.425 "null", 00:13:52.425 "ffdhe2048", 00:13:52.425 "ffdhe3072", 00:13:52.425 "ffdhe4096", 00:13:52.425 "ffdhe6144", 00:13:52.425 "ffdhe8192" 00:13:52.425 ] 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "bdev_nvme_set_hotplug", 00:13:52.425 "params": { 00:13:52.425 "period_us": 100000, 00:13:52.425 "enable": false 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "bdev_malloc_create", 00:13:52.425 "params": { 00:13:52.425 "name": "malloc0", 00:13:52.425 "num_blocks": 8192, 00:13:52.425 "block_size": 4096, 00:13:52.425 "physical_block_size": 4096, 00:13:52.425 "uuid": "38b57d56-3d5e-4b80-84c1-ed4719646145", 00:13:52.425 "optimal_io_boundary": 0, 00:13:52.425 "md_size": 0, 00:13:52.425 "dif_type": 0, 00:13:52.425 "dif_is_head_of_md": false, 00:13:52.425 "dif_pi_format": 0 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "bdev_wait_for_examine" 00:13:52.425 } 
00:13:52.425 ] 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "subsystem": "nbd", 00:13:52.425 "config": [] 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "subsystem": "scheduler", 00:13:52.425 "config": [ 00:13:52.425 { 00:13:52.425 "method": "framework_set_scheduler", 00:13:52.425 "params": { 00:13:52.425 "name": "static" 00:13:52.425 } 00:13:52.425 } 00:13:52.425 ] 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "subsystem": "nvmf", 00:13:52.425 "config": [ 00:13:52.425 { 00:13:52.425 "method": "nvmf_set_config", 00:13:52.425 "params": { 00:13:52.425 "discovery_filter": "match_any", 00:13:52.425 "admin_cmd_passthru": { 00:13:52.425 "identify_ctrlr": false 00:13:52.425 }, 00:13:52.425 "dhchap_digests": [ 00:13:52.425 "sha256", 00:13:52.425 "sha384", 00:13:52.425 "sha512" 00:13:52.425 ], 00:13:52.425 "dhchap_dhgroups": [ 00:13:52.425 "null", 00:13:52.425 "ffdhe2048", 00:13:52.425 "ffdhe3072", 00:13:52.425 "ffdhe4096", 00:13:52.425 "ffdhe6144", 00:13:52.425 "ffdhe8192" 00:13:52.425 ] 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "nvmf_set_max_subsystems", 00:13:52.425 "params": { 00:13:52.425 "max_subsystems": 1024 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "nvmf_set_crdt", 00:13:52.425 "params": { 00:13:52.425 "crdt1": 0, 00:13:52.425 "crdt2": 0, 00:13:52.425 "crdt3": 0 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "nvmf_create_transport", 00:13:52.425 "params": { 00:13:52.425 "trtype": "TCP", 00:13:52.425 "max_queue_depth": 128, 00:13:52.425 "max_io_qpairs_per_ctrlr": 127, 00:13:52.425 "in_capsule_data_size": 4096, 00:13:52.425 "max_io_size": 131072, 00:13:52.425 "io_unit_size": 131072, 00:13:52.425 "max_aq_depth": 128, 00:13:52.425 "num_shared_buffers": 511, 00:13:52.425 "buf_cache_size": 4294967295, 00:13:52.425 "dif_insert_or_strip": false, 00:13:52.425 "zcopy": false, 00:13:52.425 "c2h_success": false, 00:13:52.425 "sock_priority": 0, 00:13:52.425 "abort_timeout_sec": 1, 00:13:52.425 "ack_timeout": 0, 00:13:52.425 "data_wr_pool_size": 0 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "nvmf_create_subsystem", 00:13:52.425 "params": { 00:13:52.425 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.425 "allow_any_host": false, 00:13:52.425 "serial_number": "SPDK00000000000001", 00:13:52.425 "model_number": "SPDK bdev Controller", 00:13:52.425 "max_namespaces": 10, 00:13:52.425 "min_cntlid": 1, 00:13:52.425 "max_cntlid": 65519, 00:13:52.425 "ana_reporting": false 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "nvmf_subsystem_add_host", 00:13:52.425 "params": { 00:13:52.425 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.425 "host": "nqn.2016-06.io.spdk:host1", 00:13:52.425 "psk": "key0" 00:13:52.425 } 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "method": "nvmf_subsystem_add_ns", 00:13:52.425 "params": { 00:13:52.425 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.426 "namespace": { 00:13:52.426 "nsid": 1, 00:13:52.426 "bdev_name": "malloc0", 00:13:52.426 "nguid": "38B57D563D5E4B8084C1ED4719646145", 00:13:52.426 "uuid": "38b57d56-3d5e-4b80-84c1-ed4719646145", 00:13:52.426 "no_auto_visible": false 00:13:52.426 } 00:13:52.426 } 00:13:52.426 }, 00:13:52.426 { 00:13:52.426 "method": "nvmf_subsystem_add_listener", 00:13:52.426 "params": { 00:13:52.426 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.426 "listen_address": { 00:13:52.426 "trtype": "TCP", 00:13:52.426 "adrfam": "IPv4", 00:13:52.426 "traddr": "10.0.0.3", 00:13:52.426 "trsvcid": "4420" 00:13:52.426 }, 00:13:52.426 "secure_channel": true 
00:13:52.426 } 00:13:52.426 } 00:13:52.426 ] 00:13:52.426 } 00:13:52.426 ] 00:13:52.426 }' 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71927 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71927 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71927 ']' 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.426 09:28:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.426 [2024-10-16 09:28:16.784412] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:13:52.426 [2024-10-16 09:28:16.784511] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.684 [2024-10-16 09:28:16.915630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.684 [2024-10-16 09:28:16.960973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.685 [2024-10-16 09:28:16.961062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.685 [2024-10-16 09:28:16.961089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.685 [2024-10-16 09:28:16.961098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.685 [2024-10-16 09:28:16.961105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
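The target above receives its whole configuration through a file descriptor rather than a file on disk: the JSON blob echoed at target/tls.sh@205 is what nvmf_tgt reads via '-c /dev/fd/62'. A minimal sketch of the same pattern outside the test harness, assuming bash process substitution and abbreviating the JSON to the subsystems block printed above (the path and variable name here are illustrative, not taken from the log):

  # Hand an inline JSON config to the target through a process-substitution fd.
  CONFIG_JSON='{ "subsystems": [ ... ] }'      # keyring/sock/bdev/nvmf sections as echoed above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(printf '%s' "$CONFIG_JSON")

The -i/-e/-m flags mirror the invocation visible in the trace; the bdevperf client started a few lines below is fed its own config the same way (-c /dev/fd/63).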
00:13:52.685 [2024-10-16 09:28:16.961571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.943 [2024-10-16 09:28:17.128732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:52.943 [2024-10-16 09:28:17.202943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.943 [2024-10-16 09:28:17.234925] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:52.943 [2024-10-16 09:28:17.235161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71959 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71959 /var/tmp/bdevperf.sock 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71959 ']' 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:53.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:53.510 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:53.510 "subsystems": [ 00:13:53.510 { 00:13:53.511 "subsystem": "keyring", 00:13:53.511 "config": [ 00:13:53.511 { 00:13:53.511 "method": "keyring_file_add_key", 00:13:53.511 "params": { 00:13:53.511 "name": "key0", 00:13:53.511 "path": "/tmp/tmp.35hJ8IbUnf" 00:13:53.511 } 00:13:53.511 } 00:13:53.511 ] 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "subsystem": "iobuf", 00:13:53.511 "config": [ 00:13:53.511 { 00:13:53.511 "method": "iobuf_set_options", 00:13:53.511 "params": { 00:13:53.511 "small_pool_count": 8192, 00:13:53.511 "large_pool_count": 1024, 00:13:53.511 "small_bufsize": 8192, 00:13:53.511 "large_bufsize": 135168 00:13:53.511 } 00:13:53.511 } 00:13:53.511 ] 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "subsystem": "sock", 00:13:53.511 "config": [ 00:13:53.511 { 00:13:53.511 "method": "sock_set_default_impl", 00:13:53.511 "params": { 00:13:53.511 "impl_name": "uring" 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "sock_impl_set_options", 00:13:53.511 "params": { 00:13:53.511 "impl_name": "ssl", 00:13:53.511 "recv_buf_size": 4096, 00:13:53.511 "send_buf_size": 4096, 00:13:53.511 "enable_recv_pipe": true, 00:13:53.511 "enable_quickack": false, 00:13:53.511 "enable_placement_id": 0, 00:13:53.511 "enable_zerocopy_send_server": true, 00:13:53.511 "enable_zerocopy_send_client": false, 00:13:53.511 "zerocopy_threshold": 0, 00:13:53.511 "tls_version": 0, 00:13:53.511 "enable_ktls": false 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "sock_impl_set_options", 00:13:53.511 "params": { 00:13:53.511 "impl_name": "posix", 00:13:53.511 "recv_buf_size": 2097152, 00:13:53.511 "send_buf_size": 2097152, 00:13:53.511 "enable_recv_pipe": true, 00:13:53.511 "enable_quickack": false, 00:13:53.511 "enable_placement_id": 0, 00:13:53.511 "enable_zerocopy_send_server": true, 00:13:53.511 "enable_zerocopy_send_client": false, 00:13:53.511 "zerocopy_threshold": 0, 00:13:53.511 "tls_version": 0, 00:13:53.511 "enable_ktls": false 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "sock_impl_set_options", 00:13:53.511 "params": { 00:13:53.511 "impl_name": "uring", 00:13:53.511 "recv_buf_size": 2097152, 00:13:53.511 "send_buf_size": 2097152, 00:13:53.511 "enable_recv_pipe": true, 00:13:53.511 "enable_quickack": false, 00:13:53.511 "enable_placement_id": 0, 00:13:53.511 "enable_zerocopy_send_server": false, 00:13:53.511 "enable_zerocopy_send_client": false, 00:13:53.511 "zerocopy_threshold": 0, 00:13:53.511 "tls_version": 0, 00:13:53.511 "enable_ktls": false 00:13:53.511 } 00:13:53.511 } 00:13:53.511 ] 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "subsystem": "vmd", 00:13:53.511 "config": [] 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "subsystem": "accel", 00:13:53.511 "config": [ 00:13:53.511 { 00:13:53.511 "method": "accel_set_options", 00:13:53.511 "params": { 00:13:53.511 "small_cache_size": 128, 00:13:53.511 "large_cache_size": 16, 00:13:53.511 "task_count": 2048, 00:13:53.511 "sequence_count": 2048, 00:13:53.511 "buf_count": 2048 00:13:53.511 } 00:13:53.511 } 00:13:53.511 ] 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "subsystem": "bdev", 00:13:53.511 "config": [ 00:13:53.511 { 00:13:53.511 "method": "bdev_set_options", 00:13:53.511 "params": { 00:13:53.511 "bdev_io_pool_size": 65535, 00:13:53.511 "bdev_io_cache_size": 256, 00:13:53.511 "bdev_auto_examine": true, 00:13:53.511 "iobuf_small_cache_size": 128, 00:13:53.511 "iobuf_large_cache_size": 16 
00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "bdev_raid_set_options", 00:13:53.511 "params": { 00:13:53.511 "process_window_size_kb": 1024, 00:13:53.511 "process_max_bandwidth_mb_sec": 0 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "bdev_iscsi_set_options", 00:13:53.511 "params": { 00:13:53.511 "timeout_sec": 30 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "bdev_nvme_set_options", 00:13:53.511 "params": { 00:13:53.511 "action_on_timeout": "none", 00:13:53.511 "timeout_us": 0, 00:13:53.511 "timeout_admin_us": 0, 00:13:53.511 "keep_alive_timeout_ms": 10000, 00:13:53.511 "arbitration_burst": 0, 00:13:53.511 "low_priority_weight": 0, 00:13:53.511 "medium_priority_weight": 0, 00:13:53.511 "high_priority_weight": 0, 00:13:53.511 "nvme_adminq_poll_period_us": 10000, 00:13:53.511 "nvme_ioq_poll_period_us": 0, 00:13:53.511 "io_queue_requests": 512, 00:13:53.511 "delay_cmd_submit": true, 00:13:53.511 "transport_retry_count": 4, 00:13:53.511 "bdev_retry_count": 3, 00:13:53.511 "transport_ack_timeout": 0, 00:13:53.511 "ctrlr_loss_timeout_sec": 0, 00:13:53.511 "reconnect_delay_sec": 0, 00:13:53.511 "fast_io_fail_timeout_sec": 0, 00:13:53.511 "disable_auto_failback": false, 00:13:53.511 "generate_uuids": false, 00:13:53.511 "transport_tos": 0, 00:13:53.511 "nvme_error_stat": false, 00:13:53.511 "rdma_srq_size": 0, 00:13:53.511 "io_path_stat": false, 00:13:53.511 "allow_accel_sequence": false, 00:13:53.511 "rdma_max_cq_size": 0, 00:13:53.511 "rdma_cm_event_timeout_ms": 0, 00:13:53.511 "dhchap_digests": [ 00:13:53.511 "sha256", 00:13:53.511 "sha384", 00:13:53.511 "sha512" 00:13:53.511 ], 00:13:53.511 "dhchap_dhgroups": [ 00:13:53.511 "null", 00:13:53.511 "ffdhe2048", 00:13:53.511 "ffdhe3072", 00:13:53.511 "ffdhe4096", 00:13:53.511 "ffdhe6144", 00:13:53.511 "ffdhe8192" 00:13:53.511 ] 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "bdev_nvme_attach_controller", 00:13:53.511 "params": { 00:13:53.511 "name": "TLSTEST", 00:13:53.511 "trtype": "TCP", 00:13:53.511 "adrfam": "IPv4", 00:13:53.511 "traddr": "10.0.0.3", 00:13:53.511 "trsvcid": "4420", 00:13:53.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.511 "prchk_reftag": false, 00:13:53.511 "prchk_guard": false, 00:13:53.511 "ctrlr_loss_timeout_sec": 0, 00:13:53.511 "reconnect_delay_sec": 0, 00:13:53.511 "fast_io_fail_timeout_sec": 0, 00:13:53.511 "psk": "key0", 00:13:53.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.511 "hdgst": false, 00:13:53.511 "ddgst": false, 00:13:53.511 "multipath": "multipath" 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "bdev_nvme_set_hotplug", 00:13:53.511 "params": { 00:13:53.511 "period_us": 100000, 00:13:53.511 "enable": false 00:13:53.511 } 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "method": "bdev_wait_for_examine" 00:13:53.511 } 00:13:53.511 ] 00:13:53.511 }, 00:13:53.511 { 00:13:53.511 "subsystem": "nbd", 00:13:53.511 "config": [] 00:13:53.511 } 00:13:53.511 ] 00:13:53.511 }' 00:13:53.511 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.511 09:28:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.511 [2024-10-16 09:28:17.881566] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:13:53.511 [2024-10-16 09:28:17.881677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71959 ] 00:13:53.771 [2024-10-16 09:28:18.019902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.771 [2024-10-16 09:28:18.078138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.029 [2024-10-16 09:28:18.212771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.029 [2024-10-16 09:28:18.260052] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.596 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.596 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:54.596 09:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:54.596 Running I/O for 10 seconds... 00:13:56.935 4736.00 IOPS, 18.50 MiB/s [2024-10-16T09:28:22.275Z] 4736.00 IOPS, 18.50 MiB/s [2024-10-16T09:28:23.212Z] 4760.67 IOPS, 18.60 MiB/s [2024-10-16T09:28:24.148Z] 4768.25 IOPS, 18.63 MiB/s [2024-10-16T09:28:25.084Z] 4769.60 IOPS, 18.63 MiB/s [2024-10-16T09:28:26.020Z] 4771.17 IOPS, 18.64 MiB/s [2024-10-16T09:28:26.956Z] 4780.43 IOPS, 18.67 MiB/s [2024-10-16T09:28:28.332Z] 4783.12 IOPS, 18.68 MiB/s [2024-10-16T09:28:29.294Z] 4786.44 IOPS, 18.70 MiB/s [2024-10-16T09:28:29.294Z] 4789.60 IOPS, 18.71 MiB/s 00:14:04.890 Latency(us) 00:14:04.890 [2024-10-16T09:28:29.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.890 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:04.890 Verification LBA range: start 0x0 length 0x2000 00:14:04.890 TLSTESTn1 : 10.01 4795.71 18.73 0.00 0.00 26647.25 4408.79 20614.05 00:14:04.890 [2024-10-16T09:28:29.294Z] =================================================================================================================== 00:14:04.890 [2024-10-16T09:28:29.294Z] Total : 4795.71 18.73 0.00 0.00 26647.25 4408.79 20614.05 00:14:04.890 { 00:14:04.890 "results": [ 00:14:04.890 { 00:14:04.890 "job": "TLSTESTn1", 00:14:04.890 "core_mask": "0x4", 00:14:04.890 "workload": "verify", 00:14:04.890 "status": "finished", 00:14:04.890 "verify_range": { 00:14:04.890 "start": 0, 00:14:04.890 "length": 8192 00:14:04.890 }, 00:14:04.890 "queue_depth": 128, 00:14:04.890 "io_size": 4096, 00:14:04.890 "runtime": 10.013539, 00:14:04.890 "iops": 4795.707092167914, 00:14:04.890 "mibps": 18.733230828780915, 00:14:04.890 "io_failed": 0, 00:14:04.890 "io_timeout": 0, 00:14:04.890 "avg_latency_us": 26647.248108253418, 00:14:04.890 "min_latency_us": 4408.785454545455, 00:14:04.890 "max_latency_us": 20614.05090909091 00:14:04.890 } 00:14:04.890 ], 00:14:04.890 "core_count": 1 00:14:04.890 } 00:14:04.890 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.890 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71959 00:14:04.890 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71959 ']' 00:14:04.890 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 71959 00:14:04.890 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:04.890 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.890 09:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71959 00:14:04.890 killing process with pid 71959 00:14:04.890 Received shutdown signal, test time was about 10.000000 seconds 00:14:04.890 00:14:04.890 Latency(us) 00:14:04.890 [2024-10-16T09:28:29.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.890 [2024-10-16T09:28:29.294Z] =================================================================================================================== 00:14:04.890 [2024-10-16T09:28:29.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71959' 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71959 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71959 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71927 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71927 ']' 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71927 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71927 00:14:04.890 killing process with pid 71927 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71927' 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71927 00:14:04.890 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71927 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72102 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:05.150 09:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72102 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72102 ']' 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.150 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.150 [2024-10-16 09:28:29.481423] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:05.150 [2024-10-16 09:28:29.481518] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.409 [2024-10-16 09:28:29.623700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.409 [2024-10-16 09:28:29.675624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.409 [2024-10-16 09:28:29.675687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.409 [2024-10-16 09:28:29.675701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.409 [2024-10-16 09:28:29.675713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.409 [2024-10-16 09:28:29.675722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.409 [2024-10-16 09:28:29.676160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.409 [2024-10-16 09:28:29.732941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.409 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.409 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:05.409 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:05.409 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:05.409 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.668 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.668 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.35hJ8IbUnf 00:14:05.668 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.35hJ8IbUnf 00:14:05.668 09:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:05.668 [2024-10-16 09:28:30.034224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.668 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:06.235 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:06.235 [2024-10-16 09:28:30.534322] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.235 [2024-10-16 09:28:30.534537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:06.235 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:06.493 malloc0 00:14:06.493 09:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:06.751 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:14:07.008 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72146 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72146 /var/tmp/bdevperf.sock 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72146 ']' 00:14:07.267 
09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.267 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.267 [2024-10-16 09:28:31.577606] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:07.267 [2024-10-16 09:28:31.577899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72146 ] 00:14:07.526 [2024-10-16 09:28:31.716723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.526 [2024-10-16 09:28:31.764237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.526 [2024-10-16 09:28:31.817505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:07.526 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.526 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:07.526 09:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:14:07.784 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:08.043 [2024-10-16 09:28:32.309815] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.043 nvme0n1 00:14:08.043 09:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:08.303 Running I/O for 1 seconds... 
00:14:09.241 4608.00 IOPS, 18.00 MiB/s 00:14:09.241 Latency(us) 00:14:09.241 [2024-10-16T09:28:33.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.241 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:09.241 Verification LBA range: start 0x0 length 0x2000 00:14:09.241 nvme0n1 : 1.02 4660.03 18.20 0.00 0.00 27209.06 10068.71 20852.36 00:14:09.241 [2024-10-16T09:28:33.645Z] =================================================================================================================== 00:14:09.241 [2024-10-16T09:28:33.645Z] Total : 4660.03 18.20 0.00 0.00 27209.06 10068.71 20852.36 00:14:09.241 { 00:14:09.241 "results": [ 00:14:09.241 { 00:14:09.241 "job": "nvme0n1", 00:14:09.241 "core_mask": "0x2", 00:14:09.241 "workload": "verify", 00:14:09.241 "status": "finished", 00:14:09.241 "verify_range": { 00:14:09.241 "start": 0, 00:14:09.241 "length": 8192 00:14:09.241 }, 00:14:09.241 "queue_depth": 128, 00:14:09.241 "io_size": 4096, 00:14:09.241 "runtime": 1.016303, 00:14:09.241 "iops": 4660.027570517847, 00:14:09.241 "mibps": 18.20323269733534, 00:14:09.241 "io_failed": 0, 00:14:09.241 "io_timeout": 0, 00:14:09.241 "avg_latency_us": 27209.057493857494, 00:14:09.241 "min_latency_us": 10068.712727272727, 00:14:09.241 "max_latency_us": 20852.363636363636 00:14:09.241 } 00:14:09.241 ], 00:14:09.241 "core_count": 1 00:14:09.241 } 00:14:09.241 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72146 00:14:09.241 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72146 ']' 00:14:09.241 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72146 00:14:09.241 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:09.241 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.241 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72146 00:14:09.241 killing process with pid 72146 00:14:09.241 Received shutdown signal, test time was about 1.000000 seconds 00:14:09.241 00:14:09.242 Latency(us) 00:14:09.242 [2024-10-16T09:28:33.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.242 [2024-10-16T09:28:33.646Z] =================================================================================================================== 00:14:09.242 [2024-10-16T09:28:33.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.242 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:09.242 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:09.242 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72146' 00:14:09.242 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72146 00:14:09.242 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72146 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72102 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72102 ']' 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72102 00:14:09.501 09:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72102 00:14:09.501 killing process with pid 72102 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72102' 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72102 00:14:09.501 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72102 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72188 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72188 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72188 ']' 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.760 09:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.760 [2024-10-16 09:28:34.017259] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:09.760 [2024-10-16 09:28:34.017473] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.760 [2024-10-16 09:28:34.149131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.019 [2024-10-16 09:28:34.190706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.019 [2024-10-16 09:28:34.190760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.019 [2024-10-16 09:28:34.190786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.019 [2024-10-16 09:28:34.190793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.019 [2024-10-16 09:28:34.190799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.019 [2024-10-16 09:28:34.191124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.019 [2024-10-16 09:28:34.242079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.019 [2024-10-16 09:28:34.345814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.019 malloc0 00:14:10.019 [2024-10-16 09:28:34.376214] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:10.019 [2024-10-16 09:28:34.376418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72208 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72208 /var/tmp/bdevperf.sock 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72208 ']' 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.019 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.278 [2024-10-16 09:28:34.451272] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:10.278 [2024-10-16 09:28:34.451515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72208 ] 00:14:10.278 [2024-10-16 09:28:34.583825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.278 [2024-10-16 09:28:34.629773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.278 [2024-10-16 09:28:34.682131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.537 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.537 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:10.537 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.35hJ8IbUnf 00:14:10.796 09:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:10.796 [2024-10-16 09:28:35.179297] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.055 nvme0n1 00:14:11.055 09:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:11.055 Running I/O for 1 seconds... 
00:14:12.263 4547.00 IOPS, 17.76 MiB/s 00:14:12.263 Latency(us) 00:14:12.263 [2024-10-16T09:28:36.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.263 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:12.263 Verification LBA range: start 0x0 length 0x2000 00:14:12.263 nvme0n1 : 1.02 4574.00 17.87 0.00 0.00 27629.75 7179.17 22639.71 00:14:12.263 [2024-10-16T09:28:36.667Z] =================================================================================================================== 00:14:12.263 [2024-10-16T09:28:36.667Z] Total : 4574.00 17.87 0.00 0.00 27629.75 7179.17 22639.71 00:14:12.263 { 00:14:12.263 "results": [ 00:14:12.263 { 00:14:12.263 "job": "nvme0n1", 00:14:12.263 "core_mask": "0x2", 00:14:12.263 "workload": "verify", 00:14:12.263 "status": "finished", 00:14:12.263 "verify_range": { 00:14:12.263 "start": 0, 00:14:12.263 "length": 8192 00:14:12.263 }, 00:14:12.263 "queue_depth": 128, 00:14:12.263 "io_size": 4096, 00:14:12.263 "runtime": 1.022082, 00:14:12.263 "iops": 4573.996998283895, 00:14:12.263 "mibps": 17.867175774546464, 00:14:12.263 "io_failed": 0, 00:14:12.263 "io_timeout": 0, 00:14:12.263 "avg_latency_us": 27629.745022070976, 00:14:12.263 "min_latency_us": 7179.170909090909, 00:14:12.263 "max_latency_us": 22639.70909090909 00:14:12.263 } 00:14:12.263 ], 00:14:12.263 "core_count": 1 00:14:12.263 } 00:14:12.263 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:12.263 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.263 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.263 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.263 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:12.263 "subsystems": [ 00:14:12.263 { 00:14:12.263 "subsystem": "keyring", 00:14:12.263 "config": [ 00:14:12.263 { 00:14:12.263 "method": "keyring_file_add_key", 00:14:12.263 "params": { 00:14:12.263 "name": "key0", 00:14:12.263 "path": "/tmp/tmp.35hJ8IbUnf" 00:14:12.263 } 00:14:12.263 } 00:14:12.263 ] 00:14:12.263 }, 00:14:12.263 { 00:14:12.263 "subsystem": "iobuf", 00:14:12.263 "config": [ 00:14:12.263 { 00:14:12.263 "method": "iobuf_set_options", 00:14:12.263 "params": { 00:14:12.263 "small_pool_count": 8192, 00:14:12.263 "large_pool_count": 1024, 00:14:12.263 "small_bufsize": 8192, 00:14:12.263 "large_bufsize": 135168 00:14:12.263 } 00:14:12.263 } 00:14:12.263 ] 00:14:12.263 }, 00:14:12.263 { 00:14:12.263 "subsystem": "sock", 00:14:12.263 "config": [ 00:14:12.263 { 00:14:12.263 "method": "sock_set_default_impl", 00:14:12.263 "params": { 00:14:12.263 "impl_name": "uring" 00:14:12.263 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "sock_impl_set_options", 00:14:12.264 "params": { 00:14:12.264 "impl_name": "ssl", 00:14:12.264 "recv_buf_size": 4096, 00:14:12.264 "send_buf_size": 4096, 00:14:12.264 "enable_recv_pipe": true, 00:14:12.264 "enable_quickack": false, 00:14:12.264 "enable_placement_id": 0, 00:14:12.264 "enable_zerocopy_send_server": true, 00:14:12.264 "enable_zerocopy_send_client": false, 00:14:12.264 "zerocopy_threshold": 0, 00:14:12.264 "tls_version": 0, 00:14:12.264 "enable_ktls": false 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "sock_impl_set_options", 00:14:12.264 "params": { 00:14:12.264 "impl_name": "posix", 00:14:12.264 "recv_buf_size": 
2097152, 00:14:12.264 "send_buf_size": 2097152, 00:14:12.264 "enable_recv_pipe": true, 00:14:12.264 "enable_quickack": false, 00:14:12.264 "enable_placement_id": 0, 00:14:12.264 "enable_zerocopy_send_server": true, 00:14:12.264 "enable_zerocopy_send_client": false, 00:14:12.264 "zerocopy_threshold": 0, 00:14:12.264 "tls_version": 0, 00:14:12.264 "enable_ktls": false 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "sock_impl_set_options", 00:14:12.264 "params": { 00:14:12.264 "impl_name": "uring", 00:14:12.264 "recv_buf_size": 2097152, 00:14:12.264 "send_buf_size": 2097152, 00:14:12.264 "enable_recv_pipe": true, 00:14:12.264 "enable_quickack": false, 00:14:12.264 "enable_placement_id": 0, 00:14:12.264 "enable_zerocopy_send_server": false, 00:14:12.264 "enable_zerocopy_send_client": false, 00:14:12.264 "zerocopy_threshold": 0, 00:14:12.264 "tls_version": 0, 00:14:12.264 "enable_ktls": false 00:14:12.264 } 00:14:12.264 } 00:14:12.264 ] 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "subsystem": "vmd", 00:14:12.264 "config": [] 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "subsystem": "accel", 00:14:12.264 "config": [ 00:14:12.264 { 00:14:12.264 "method": "accel_set_options", 00:14:12.264 "params": { 00:14:12.264 "small_cache_size": 128, 00:14:12.264 "large_cache_size": 16, 00:14:12.264 "task_count": 2048, 00:14:12.264 "sequence_count": 2048, 00:14:12.264 "buf_count": 2048 00:14:12.264 } 00:14:12.264 } 00:14:12.264 ] 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "subsystem": "bdev", 00:14:12.264 "config": [ 00:14:12.264 { 00:14:12.264 "method": "bdev_set_options", 00:14:12.264 "params": { 00:14:12.264 "bdev_io_pool_size": 65535, 00:14:12.264 "bdev_io_cache_size": 256, 00:14:12.264 "bdev_auto_examine": true, 00:14:12.264 "iobuf_small_cache_size": 128, 00:14:12.264 "iobuf_large_cache_size": 16 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "bdev_raid_set_options", 00:14:12.264 "params": { 00:14:12.264 "process_window_size_kb": 1024, 00:14:12.264 "process_max_bandwidth_mb_sec": 0 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "bdev_iscsi_set_options", 00:14:12.264 "params": { 00:14:12.264 "timeout_sec": 30 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "bdev_nvme_set_options", 00:14:12.264 "params": { 00:14:12.264 "action_on_timeout": "none", 00:14:12.264 "timeout_us": 0, 00:14:12.264 "timeout_admin_us": 0, 00:14:12.264 "keep_alive_timeout_ms": 10000, 00:14:12.264 "arbitration_burst": 0, 00:14:12.264 "low_priority_weight": 0, 00:14:12.264 "medium_priority_weight": 0, 00:14:12.264 "high_priority_weight": 0, 00:14:12.264 "nvme_adminq_poll_period_us": 10000, 00:14:12.264 "nvme_ioq_poll_period_us": 0, 00:14:12.264 "io_queue_requests": 0, 00:14:12.264 "delay_cmd_submit": true, 00:14:12.264 "transport_retry_count": 4, 00:14:12.264 "bdev_retry_count": 3, 00:14:12.264 "transport_ack_timeout": 0, 00:14:12.264 "ctrlr_loss_timeout_sec": 0, 00:14:12.264 "reconnect_delay_sec": 0, 00:14:12.264 "fast_io_fail_timeout_sec": 0, 00:14:12.264 "disable_auto_failback": false, 00:14:12.264 "generate_uuids": false, 00:14:12.264 "transport_tos": 0, 00:14:12.264 "nvme_error_stat": false, 00:14:12.264 "rdma_srq_size": 0, 00:14:12.264 "io_path_stat": false, 00:14:12.264 "allow_accel_sequence": false, 00:14:12.264 "rdma_max_cq_size": 0, 00:14:12.264 "rdma_cm_event_timeout_ms": 0, 00:14:12.264 "dhchap_digests": [ 00:14:12.264 "sha256", 00:14:12.264 "sha384", 00:14:12.264 "sha512" 00:14:12.264 ], 00:14:12.264 "dhchap_dhgroups": [ 00:14:12.264 
"null", 00:14:12.264 "ffdhe2048", 00:14:12.264 "ffdhe3072", 00:14:12.264 "ffdhe4096", 00:14:12.264 "ffdhe6144", 00:14:12.264 "ffdhe8192" 00:14:12.264 ] 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "bdev_nvme_set_hotplug", 00:14:12.264 "params": { 00:14:12.264 "period_us": 100000, 00:14:12.264 "enable": false 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "bdev_malloc_create", 00:14:12.264 "params": { 00:14:12.264 "name": "malloc0", 00:14:12.264 "num_blocks": 8192, 00:14:12.264 "block_size": 4096, 00:14:12.264 "physical_block_size": 4096, 00:14:12.264 "uuid": "82e22ed0-a88a-4941-aace-629d30aafc55", 00:14:12.264 "optimal_io_boundary": 0, 00:14:12.264 "md_size": 0, 00:14:12.264 "dif_type": 0, 00:14:12.264 "dif_is_head_of_md": false, 00:14:12.264 "dif_pi_format": 0 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "bdev_wait_for_examine" 00:14:12.264 } 00:14:12.264 ] 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "subsystem": "nbd", 00:14:12.264 "config": [] 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "subsystem": "scheduler", 00:14:12.264 "config": [ 00:14:12.264 { 00:14:12.264 "method": "framework_set_scheduler", 00:14:12.264 "params": { 00:14:12.264 "name": "static" 00:14:12.264 } 00:14:12.264 } 00:14:12.264 ] 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "subsystem": "nvmf", 00:14:12.264 "config": [ 00:14:12.264 { 00:14:12.264 "method": "nvmf_set_config", 00:14:12.264 "params": { 00:14:12.264 "discovery_filter": "match_any", 00:14:12.264 "admin_cmd_passthru": { 00:14:12.264 "identify_ctrlr": false 00:14:12.264 }, 00:14:12.264 "dhchap_digests": [ 00:14:12.264 "sha256", 00:14:12.264 "sha384", 00:14:12.264 "sha512" 00:14:12.264 ], 00:14:12.264 "dhchap_dhgroups": [ 00:14:12.264 "null", 00:14:12.264 "ffdhe2048", 00:14:12.264 "ffdhe3072", 00:14:12.264 "ffdhe4096", 00:14:12.264 "ffdhe6144", 00:14:12.264 "ffdhe8192" 00:14:12.264 ] 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "nvmf_set_max_subsystems", 00:14:12.264 "params": { 00:14:12.264 "max_subsystems": 1024 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "nvmf_set_crdt", 00:14:12.264 "params": { 00:14:12.264 "crdt1": 0, 00:14:12.264 "crdt2": 0, 00:14:12.264 "crdt3": 0 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "nvmf_create_transport", 00:14:12.264 "params": { 00:14:12.264 "trtype": "TCP", 00:14:12.264 "max_queue_depth": 128, 00:14:12.264 "max_io_qpairs_per_ctrlr": 127, 00:14:12.264 "in_capsule_data_size": 4096, 00:14:12.264 "max_io_size": 131072, 00:14:12.264 "io_unit_size": 131072, 00:14:12.264 "max_aq_depth": 128, 00:14:12.264 "num_shared_buffers": 511, 00:14:12.264 "buf_cache_size": 4294967295, 00:14:12.264 "dif_insert_or_strip": false, 00:14:12.264 "zcopy": false, 00:14:12.264 "c2h_success": false, 00:14:12.264 "sock_priority": 0, 00:14:12.264 "abort_timeout_sec": 1, 00:14:12.264 "ack_timeout": 0, 00:14:12.264 "data_wr_pool_size": 0 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "nvmf_create_subsystem", 00:14:12.264 "params": { 00:14:12.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.264 "allow_any_host": false, 00:14:12.264 "serial_number": "00000000000000000000", 00:14:12.264 "model_number": "SPDK bdev Controller", 00:14:12.264 "max_namespaces": 32, 00:14:12.264 "min_cntlid": 1, 00:14:12.264 "max_cntlid": 65519, 00:14:12.264 "ana_reporting": false 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "nvmf_subsystem_add_host", 00:14:12.264 "params": { 00:14:12.264 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:14:12.264 "host": "nqn.2016-06.io.spdk:host1", 00:14:12.264 "psk": "key0" 00:14:12.264 } 00:14:12.264 }, 00:14:12.264 { 00:14:12.264 "method": "nvmf_subsystem_add_ns", 00:14:12.264 "params": { 00:14:12.264 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.264 "namespace": { 00:14:12.264 "nsid": 1, 00:14:12.264 "bdev_name": "malloc0", 00:14:12.264 "nguid": "82E22ED0A88A4941AACE629D30AAFC55", 00:14:12.264 "uuid": "82e22ed0-a88a-4941-aace-629d30aafc55", 00:14:12.264 "no_auto_visible": false 00:14:12.265 } 00:14:12.265 } 00:14:12.265 }, 00:14:12.265 { 00:14:12.265 "method": "nvmf_subsystem_add_listener", 00:14:12.265 "params": { 00:14:12.265 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.265 "listen_address": { 00:14:12.265 "trtype": "TCP", 00:14:12.265 "adrfam": "IPv4", 00:14:12.265 "traddr": "10.0.0.3", 00:14:12.265 "trsvcid": "4420" 00:14:12.265 }, 00:14:12.265 "secure_channel": false, 00:14:12.265 "sock_impl": "ssl" 00:14:12.265 } 00:14:12.265 } 00:14:12.265 ] 00:14:12.265 } 00:14:12.265 ] 00:14:12.265 }' 00:14:12.265 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:12.524 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:12.524 "subsystems": [ 00:14:12.524 { 00:14:12.524 "subsystem": "keyring", 00:14:12.524 "config": [ 00:14:12.524 { 00:14:12.524 "method": "keyring_file_add_key", 00:14:12.524 "params": { 00:14:12.524 "name": "key0", 00:14:12.524 "path": "/tmp/tmp.35hJ8IbUnf" 00:14:12.524 } 00:14:12.524 } 00:14:12.524 ] 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "subsystem": "iobuf", 00:14:12.524 "config": [ 00:14:12.524 { 00:14:12.524 "method": "iobuf_set_options", 00:14:12.524 "params": { 00:14:12.524 "small_pool_count": 8192, 00:14:12.524 "large_pool_count": 1024, 00:14:12.524 "small_bufsize": 8192, 00:14:12.524 "large_bufsize": 135168 00:14:12.524 } 00:14:12.524 } 00:14:12.524 ] 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "subsystem": "sock", 00:14:12.524 "config": [ 00:14:12.524 { 00:14:12.524 "method": "sock_set_default_impl", 00:14:12.524 "params": { 00:14:12.524 "impl_name": "uring" 00:14:12.524 } 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "method": "sock_impl_set_options", 00:14:12.524 "params": { 00:14:12.524 "impl_name": "ssl", 00:14:12.524 "recv_buf_size": 4096, 00:14:12.524 "send_buf_size": 4096, 00:14:12.524 "enable_recv_pipe": true, 00:14:12.524 "enable_quickack": false, 00:14:12.524 "enable_placement_id": 0, 00:14:12.524 "enable_zerocopy_send_server": true, 00:14:12.524 "enable_zerocopy_send_client": false, 00:14:12.524 "zerocopy_threshold": 0, 00:14:12.524 "tls_version": 0, 00:14:12.524 "enable_ktls": false 00:14:12.524 } 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "method": "sock_impl_set_options", 00:14:12.524 "params": { 00:14:12.524 "impl_name": "posix", 00:14:12.524 "recv_buf_size": 2097152, 00:14:12.524 "send_buf_size": 2097152, 00:14:12.524 "enable_recv_pipe": true, 00:14:12.524 "enable_quickack": false, 00:14:12.524 "enable_placement_id": 0, 00:14:12.524 "enable_zerocopy_send_server": true, 00:14:12.524 "enable_zerocopy_send_client": false, 00:14:12.524 "zerocopy_threshold": 0, 00:14:12.524 "tls_version": 0, 00:14:12.524 "enable_ktls": false 00:14:12.524 } 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "method": "sock_impl_set_options", 00:14:12.524 "params": { 00:14:12.524 "impl_name": "uring", 00:14:12.524 "recv_buf_size": 2097152, 00:14:12.524 "send_buf_size": 2097152, 00:14:12.524 
"enable_recv_pipe": true, 00:14:12.524 "enable_quickack": false, 00:14:12.524 "enable_placement_id": 0, 00:14:12.524 "enable_zerocopy_send_server": false, 00:14:12.524 "enable_zerocopy_send_client": false, 00:14:12.524 "zerocopy_threshold": 0, 00:14:12.524 "tls_version": 0, 00:14:12.524 "enable_ktls": false 00:14:12.524 } 00:14:12.524 } 00:14:12.524 ] 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "subsystem": "vmd", 00:14:12.524 "config": [] 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "subsystem": "accel", 00:14:12.524 "config": [ 00:14:12.524 { 00:14:12.524 "method": "accel_set_options", 00:14:12.524 "params": { 00:14:12.524 "small_cache_size": 128, 00:14:12.524 "large_cache_size": 16, 00:14:12.524 "task_count": 2048, 00:14:12.524 "sequence_count": 2048, 00:14:12.524 "buf_count": 2048 00:14:12.524 } 00:14:12.524 } 00:14:12.524 ] 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "subsystem": "bdev", 00:14:12.524 "config": [ 00:14:12.524 { 00:14:12.524 "method": "bdev_set_options", 00:14:12.524 "params": { 00:14:12.524 "bdev_io_pool_size": 65535, 00:14:12.524 "bdev_io_cache_size": 256, 00:14:12.524 "bdev_auto_examine": true, 00:14:12.524 "iobuf_small_cache_size": 128, 00:14:12.524 "iobuf_large_cache_size": 16 00:14:12.524 } 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "method": "bdev_raid_set_options", 00:14:12.524 "params": { 00:14:12.524 "process_window_size_kb": 1024, 00:14:12.524 "process_max_bandwidth_mb_sec": 0 00:14:12.524 } 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "method": "bdev_iscsi_set_options", 00:14:12.524 "params": { 00:14:12.524 "timeout_sec": 30 00:14:12.524 } 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "method": "bdev_nvme_set_options", 00:14:12.524 "params": { 00:14:12.524 "action_on_timeout": "none", 00:14:12.524 "timeout_us": 0, 00:14:12.524 "timeout_admin_us": 0, 00:14:12.524 "keep_alive_timeout_ms": 10000, 00:14:12.524 "arbitration_burst": 0, 00:14:12.524 "low_priority_weight": 0, 00:14:12.524 "medium_priority_weight": 0, 00:14:12.524 "high_priority_weight": 0, 00:14:12.524 "nvme_adminq_poll_period_us": 10000, 00:14:12.524 "nvme_ioq_poll_period_us": 0, 00:14:12.524 "io_queue_requests": 512, 00:14:12.524 "delay_cmd_submit": true, 00:14:12.524 "transport_retry_count": 4, 00:14:12.524 "bdev_retry_count": 3, 00:14:12.524 "transport_ack_timeout": 0, 00:14:12.524 "ctrlr_loss_timeout_sec": 0, 00:14:12.524 "reconnect_delay_sec": 0, 00:14:12.524 "fast_io_fail_timeout_sec": 0, 00:14:12.524 "disable_auto_failback": false, 00:14:12.524 "generate_uuids": false, 00:14:12.524 "transport_tos": 0, 00:14:12.524 "nvme_error_stat": false, 00:14:12.524 "rdma_srq_size": 0, 00:14:12.524 "io_path_stat": false, 00:14:12.524 "allow_accel_sequence": false, 00:14:12.524 "rdma_max_cq_size": 0, 00:14:12.524 "rdma_cm_event_timeout_ms": 0, 00:14:12.524 "dhchap_digests": [ 00:14:12.524 "sha256", 00:14:12.524 "sha384", 00:14:12.524 "sha512" 00:14:12.524 ], 00:14:12.524 "dhchap_dhgroups": [ 00:14:12.524 "null", 00:14:12.524 "ffdhe2048", 00:14:12.524 "ffdhe3072", 00:14:12.524 "ffdhe4096", 00:14:12.524 "ffdhe6144", 00:14:12.524 "ffdhe8192" 00:14:12.524 ] 00:14:12.524 } 00:14:12.524 }, 00:14:12.524 { 00:14:12.524 "method": "bdev_nvme_attach_controller", 00:14:12.524 "params": { 00:14:12.524 "name": "nvme0", 00:14:12.524 "trtype": "TCP", 00:14:12.524 "adrfam": "IPv4", 00:14:12.524 "traddr": "10.0.0.3", 00:14:12.524 "trsvcid": "4420", 00:14:12.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.524 "prchk_reftag": false, 00:14:12.524 "prchk_guard": false, 00:14:12.524 "ctrlr_loss_timeout_sec": 0, 00:14:12.524 
"reconnect_delay_sec": 0, 00:14:12.524 "fast_io_fail_timeout_sec": 0, 00:14:12.524 "psk": "key0", 00:14:12.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.525 "hdgst": false, 00:14:12.525 "ddgst": false, 00:14:12.525 "multipath": "multipath" 00:14:12.525 } 00:14:12.525 }, 00:14:12.525 { 00:14:12.525 "method": "bdev_nvme_set_hotplug", 00:14:12.525 "params": { 00:14:12.525 "period_us": 100000, 00:14:12.525 "enable": false 00:14:12.525 } 00:14:12.525 }, 00:14:12.525 { 00:14:12.525 "method": "bdev_enable_histogram", 00:14:12.525 "params": { 00:14:12.525 "name": "nvme0n1", 00:14:12.525 "enable": true 00:14:12.525 } 00:14:12.525 }, 00:14:12.525 { 00:14:12.525 "method": "bdev_wait_for_examine" 00:14:12.525 } 00:14:12.525 ] 00:14:12.525 }, 00:14:12.525 { 00:14:12.525 "subsystem": "nbd", 00:14:12.525 "config": [] 00:14:12.525 } 00:14:12.525 ] 00:14:12.525 }' 00:14:12.525 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72208 00:14:12.525 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72208 ']' 00:14:12.525 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72208 00:14:12.525 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:12.525 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.525 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72208 00:14:12.784 killing process with pid 72208 00:14:12.784 Received shutdown signal, test time was about 1.000000 seconds 00:14:12.784 00:14:12.784 Latency(us) 00:14:12.784 [2024-10-16T09:28:37.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.784 [2024-10-16T09:28:37.188Z] =================================================================================================================== 00:14:12.784 [2024-10-16T09:28:37.188Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.784 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:12.784 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:12.784 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72208' 00:14:12.784 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72208 00:14:12.784 09:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72208 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72188 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72188 ']' 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72188 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72188 00:14:12.784 killing process with pid 72188 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72188' 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72188 00:14:12.784 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72188 00:14:13.050 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:13.050 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:13.050 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:13.050 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:13.050 "subsystems": [ 00:14:13.050 { 00:14:13.050 "subsystem": "keyring", 00:14:13.050 "config": [ 00:14:13.050 { 00:14:13.050 "method": "keyring_file_add_key", 00:14:13.050 "params": { 00:14:13.050 "name": "key0", 00:14:13.050 "path": "/tmp/tmp.35hJ8IbUnf" 00:14:13.050 } 00:14:13.050 } 00:14:13.050 ] 00:14:13.050 }, 00:14:13.050 { 00:14:13.050 "subsystem": "iobuf", 00:14:13.050 "config": [ 00:14:13.050 { 00:14:13.050 "method": "iobuf_set_options", 00:14:13.050 "params": { 00:14:13.050 "small_pool_count": 8192, 00:14:13.050 "large_pool_count": 1024, 00:14:13.051 "small_bufsize": 8192, 00:14:13.051 "large_bufsize": 135168 00:14:13.051 } 00:14:13.051 } 00:14:13.051 ] 00:14:13.051 }, 00:14:13.051 { 00:14:13.051 "subsystem": "sock", 00:14:13.051 "config": [ 00:14:13.051 { 00:14:13.051 "method": "sock_set_default_impl", 00:14:13.051 "params": { 00:14:13.051 "impl_name": "uring" 00:14:13.051 } 00:14:13.051 }, 00:14:13.051 { 00:14:13.051 "method": "sock_impl_set_options", 00:14:13.051 "params": { 00:14:13.051 "impl_name": "ssl", 00:14:13.051 "recv_buf_size": 4096, 00:14:13.051 "send_buf_size": 4096, 00:14:13.051 "enable_recv_pipe": true, 00:14:13.051 "enable_quickack": false, 00:14:13.051 "enable_placement_id": 0, 00:14:13.051 "enable_zerocopy_send_server": true, 00:14:13.051 "enable_zerocopy_send_client": false, 00:14:13.051 "zerocopy_threshold": 0, 00:14:13.051 "tls_version": 0, 00:14:13.051 "enable_ktls": false 00:14:13.051 } 00:14:13.051 }, 00:14:13.051 { 00:14:13.051 "method": "sock_impl_set_options", 00:14:13.051 "params": { 00:14:13.051 "impl_name": "posix", 00:14:13.052 "recv_buf_size": 2097152, 00:14:13.052 "send_buf_size": 2097152, 00:14:13.052 "enable_recv_pipe": true, 00:14:13.052 "enable_quickack": false, 00:14:13.052 "enable_placement_id": 0, 00:14:13.052 "enable_zerocopy_send_server": true, 00:14:13.052 "enable_zerocopy_send_client": false, 00:14:13.052 "zerocopy_threshold": 0, 00:14:13.052 "tls_version": 0, 00:14:13.052 "enable_ktls": false 00:14:13.052 } 00:14:13.052 }, 00:14:13.052 { 00:14:13.052 "method": "sock_impl_set_options", 00:14:13.052 "params": { 00:14:13.052 "impl_name": "uring", 00:14:13.052 "recv_buf_size": 2097152, 00:14:13.052 "send_buf_size": 2097152, 00:14:13.052 "enable_recv_pipe": true, 00:14:13.052 "enable_quickack": false, 00:14:13.052 "enable_placement_id": 0, 00:14:13.052 "enable_zerocopy_send_server": false, 00:14:13.052 "enable_zerocopy_send_client": false, 00:14:13.052 "zerocopy_threshold": 0, 00:14:13.052 "tls_version": 0, 00:14:13.052 "enable_ktls": false 00:14:13.052 } 00:14:13.052 } 00:14:13.052 ] 00:14:13.052 }, 00:14:13.052 { 00:14:13.052 "subsystem": "vmd", 00:14:13.052 "config": [] 00:14:13.052 }, 00:14:13.052 { 
00:14:13.052 "subsystem": "accel", 00:14:13.052 "config": [ 00:14:13.052 { 00:14:13.052 "method": "accel_set_options", 00:14:13.052 "params": { 00:14:13.052 "small_cache_size": 128, 00:14:13.052 "large_cache_size": 16, 00:14:13.052 "task_count": 2048, 00:14:13.052 "sequence_count": 2048, 00:14:13.052 "buf_count": 2048 00:14:13.052 } 00:14:13.052 } 00:14:13.053 ] 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "subsystem": "bdev", 00:14:13.053 "config": [ 00:14:13.053 { 00:14:13.053 "method": "bdev_set_options", 00:14:13.053 "params": { 00:14:13.053 "bdev_io_pool_size": 65535, 00:14:13.053 "bdev_io_cache_size": 256, 00:14:13.053 "bdev_auto_examine": true, 00:14:13.053 "iobuf_small_cache_size": 128, 00:14:13.053 "iobuf_large_cache_size": 16 00:14:13.053 } 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "method": "bdev_raid_set_options", 00:14:13.053 "params": { 00:14:13.053 "process_window_size_kb": 1024, 00:14:13.053 "process_max_bandwidth_mb_sec": 0 00:14:13.053 } 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "method": "bdev_iscsi_set_options", 00:14:13.053 "params": { 00:14:13.053 "timeout_sec": 30 00:14:13.053 } 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "method": "bdev_nvme_set_options", 00:14:13.053 "params": { 00:14:13.053 "action_on_timeout": "none", 00:14:13.053 "timeout_us": 0, 00:14:13.053 "timeout_admin_us": 0, 00:14:13.053 "keep_alive_timeout_ms": 10000, 00:14:13.053 "arbitration_burst": 0, 00:14:13.053 "low_priority_weight": 0, 00:14:13.053 "medium_priority_weight": 0, 00:14:13.053 "high_priority_weight": 0, 00:14:13.053 "nvme_adminq_poll_period_us": 10000, 00:14:13.053 "nvme_ioq_poll_period_us": 0, 00:14:13.053 "io_queue_requests": 0, 00:14:13.053 "delay_cmd_submit": true, 00:14:13.053 "transport_retry_count": 4, 00:14:13.053 "bdev_retry_count": 3, 00:14:13.053 "transport_ack_timeout": 0, 00:14:13.053 "ctrlr_loss_timeout_sec": 0, 00:14:13.053 "reconnect_delay_sec": 0, 00:14:13.053 "fast_io_fail_timeout_sec": 0, 00:14:13.053 "disable_auto_failback": false, 00:14:13.053 "generate_uuids": false, 00:14:13.053 "transport_tos": 0, 00:14:13.053 "nvme_error_stat": false, 00:14:13.053 "rdma_srq_size": 0, 00:14:13.053 "io_path_stat": false, 00:14:13.053 "allow_accel_sequence": false, 00:14:13.053 "rdma_max_cq_size": 0, 00:14:13.053 "rdma_cm_event_timeout_ms": 0, 00:14:13.053 "dhchap_digests": [ 00:14:13.053 "sha256", 00:14:13.053 "sha384", 00:14:13.053 "sha512" 00:14:13.053 ], 00:14:13.053 "dhchap_dhgroups": [ 00:14:13.053 "null", 00:14:13.053 "ffdhe2048", 00:14:13.053 "ffdhe3072", 00:14:13.053 "ffdhe4096", 00:14:13.053 "ffdhe6144", 00:14:13.053 "ffdhe8192" 00:14:13.053 ] 00:14:13.053 } 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "method": "bdev_nvme_set_hotplug", 00:14:13.053 "params": { 00:14:13.053 "period_us": 100000, 00:14:13.053 "enable": false 00:14:13.053 } 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "method": "bdev_malloc_create", 00:14:13.053 "params": { 00:14:13.053 "name": "malloc0", 00:14:13.053 "num_blocks": 8192, 00:14:13.053 "block_size": 4096, 00:14:13.053 "physical_block_size": 4096, 00:14:13.053 "uuid": "82e22ed0-a88a-4941-aace-629d30aafc55", 00:14:13.053 "optimal_io_boundary": 0, 00:14:13.053 "md_size": 0, 00:14:13.053 "dif_type": 0, 00:14:13.053 "dif_is_head_of_md": false, 00:14:13.053 "dif_pi_format": 0 00:14:13.053 } 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "method": "bdev_wait_for_examine" 00:14:13.053 } 00:14:13.053 ] 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "subsystem": "nbd", 00:14:13.053 "config": [] 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "subsystem": 
"scheduler", 00:14:13.053 "config": [ 00:14:13.053 { 00:14:13.053 "method": "framework_set_scheduler", 00:14:13.053 "params": { 00:14:13.053 "name": "static" 00:14:13.053 } 00:14:13.053 } 00:14:13.053 ] 00:14:13.053 }, 00:14:13.053 { 00:14:13.053 "subsystem": "nvmf", 00:14:13.053 "config": [ 00:14:13.053 { 00:14:13.053 "method": "nvmf_set_config", 00:14:13.053 "params": { 00:14:13.053 "discovery_filter": "match_any", 00:14:13.053 "admin_cmd_passthru": { 00:14:13.053 "identify_ctrlr": false 00:14:13.053 }, 00:14:13.053 "dhchap_digests": [ 00:14:13.053 "sha256", 00:14:13.053 "sha384", 00:14:13.053 "sha512" 00:14:13.053 ], 00:14:13.053 "dhchap_dhgroups": [ 00:14:13.054 "null", 00:14:13.054 "ffdhe2048", 00:14:13.054 "ffdhe3072", 00:14:13.054 "ffdhe4096", 00:14:13.054 "ffdhe6144", 00:14:13.054 "ffdhe8192" 00:14:13.054 ] 00:14:13.054 } 00:14:13.054 }, 00:14:13.054 { 00:14:13.054 "method": "nvmf_set_max_subsystems", 00:14:13.054 "params": { 00:14:13.054 "max_subsystems": 1024 00:14:13.054 } 00:14:13.054 }, 00:14:13.054 { 00:14:13.054 "method": "nvmf_set_crdt", 00:14:13.054 "params": { 00:14:13.054 "crdt1": 0, 00:14:13.054 "crdt2": 0, 00:14:13.054 "crdt3": 0 00:14:13.054 } 00:14:13.054 }, 00:14:13.054 { 00:14:13.054 "method": "nvmf_create_transport", 00:14:13.054 "params": { 00:14:13.054 "trtype": "TCP", 00:14:13.054 "max_queue_depth": 128, 00:14:13.054 "max_io_qpairs_per_ctrlr": 127, 00:14:13.054 "in_capsule_data_size": 4096, 00:14:13.054 "max_io_size": 131072, 00:14:13.054 "io_unit_size": 131072, 00:14:13.054 "max_aq_depth": 128, 00:14:13.054 "num_shared_buffers": 511, 00:14:13.054 "buf_cache_size": 4294967295, 00:14:13.054 "dif_insert_or_strip": false, 00:14:13.054 "zcopy": false, 00:14:13.054 "c2h_success": false, 00:14:13.054 "sock_priority": 0, 00:14:13.054 "abort_timeout_sec": 1, 00:14:13.054 "ack_timeout": 0, 00:14:13.054 "data_wr_pool_size": 0 00:14:13.054 } 00:14:13.054 }, 00:14:13.054 { 00:14:13.054 "method": "nvmf_create_subsystem", 00:14:13.054 "params": { 00:14:13.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.054 "allow_any_host": false, 00:14:13.054 "serial_number": "00000000000000000000", 00:14:13.054 "model_number": "SPDK bdev Controller", 00:14:13.054 "max_namespaces": 32, 00:14:13.054 "min_cntlid": 1, 00:14:13.054 "max_cntlid": 65519, 00:14:13.054 "ana_reporting": false 00:14:13.054 } 00:14:13.054 }, 00:14:13.054 { 00:14:13.054 "method": "nvmf_subsystem_add_host", 00:14:13.054 "params": { 00:14:13.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.054 "host": "nqn.2016-06.io.spdk:host1", 00:14:13.054 "psk": "key0" 00:14:13.054 } 00:14:13.054 }, 00:14:13.054 { 00:14:13.054 "method": "nvmf_subsystem_add_ns", 00:14:13.054 "params": { 00:14:13.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.054 "namespace": { 00:14:13.054 "nsid": 1, 00:14:13.054 "bdev_name": "malloc0", 00:14:13.054 "nguid": "82E22ED0A88A4941AACE629D30AAFC55", 00:14:13.054 "uuid": "82e22ed0-a88a-4941-aace-629d30aafc55", 00:14:13.054 "no_auto_visible": false 00:14:13.054 } 00:14:13.054 } 00:14:13.054 }, 00:14:13.054 { 00:14:13.054 "method": "nvmf_subsystem_add_listener", 00:14:13.054 "params": { 00:14:13.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.054 "listen_address": { 00:14:13.054 "trtype": "TCP", 00:14:13.054 "adrfam": "IPv4", 00:14:13.054 "traddr": "10.0.0.3", 00:14:13.054 "trsvcid": "4420" 00:14:13.054 }, 00:14:13.054 "secure_channel": false, 00:14:13.054 "sock_impl": "ssl" 00:14:13.054 } 00:14:13.054 } 00:14:13.054 ] 00:14:13.054 } 00:14:13.054 ] 00:14:13.054 }' 00:14:13.054 09:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72258 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72258 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72258 ']' 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.054 09:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.054 [2024-10-16 09:28:37.414961] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:13.054 [2024-10-16 09:28:37.415219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.312 [2024-10-16 09:28:37.555099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.312 [2024-10-16 09:28:37.598931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.312 [2024-10-16 09:28:37.599237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.312 [2024-10-16 09:28:37.599408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.312 [2024-10-16 09:28:37.599527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.312 [2024-10-16 09:28:37.599604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
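The target above is launched with its JSON configuration handed over an anonymous file descriptor (-c /dev/fd/62) rather than a file on disk. A minimal bash sketch of that pattern, assuming $tgtcfg holds the JSON captured by save_config earlier in this run and assuming the fd comes from process substitution (the log only shows the resulting /dev/fd path):

# Sketch only: start nvmf_tgt with the saved config fed through process
# substitution, which shows up as a /dev/fd/NN path on the command line as above.
# The -i (shm id), -e (tracepoint mask) and -c (config) flags are the ones
# visible in the logged command.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
nvmfpid=$!
# waitforlisten is the helper from test/nvmf/common.sh invoked in the log;
# it blocks until the app answers on its RPC socket.
waitforlisten "$nvmfpid"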
00:14:13.312 [2024-10-16 09:28:37.600122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.571 [2024-10-16 09:28:37.765013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.571 [2024-10-16 09:28:37.839496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.571 [2024-10-16 09:28:37.871460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.571 [2024-10-16 09:28:37.871835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72288 00:14:14.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72288 /var/tmp/bdevperf.sock 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72288 ']' 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
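The initiator side follows the same pattern: bdevperf is started idle (-z) with its own config on /dev/fd/63, and the verify workload is only triggered afterwards over its RPC socket. A sketch assembled from the commands that appear verbatim in this log (the process substitution for the config fd is, again, an assumption):

# Sketch only: launch bdevperf waiting for RPC (-z), with the bperf config on an fd.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
# Kick off the timed run; the 1-second verify results shown further down come from this call.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests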
00:14:14.140 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:14.140 "subsystems": [ 00:14:14.140 { 00:14:14.140 "subsystem": "keyring", 00:14:14.140 "config": [ 00:14:14.140 { 00:14:14.141 "method": "keyring_file_add_key", 00:14:14.141 "params": { 00:14:14.141 "name": "key0", 00:14:14.141 "path": "/tmp/tmp.35hJ8IbUnf" 00:14:14.141 } 00:14:14.141 } 00:14:14.141 ] 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "subsystem": "iobuf", 00:14:14.141 "config": [ 00:14:14.141 { 00:14:14.141 "method": "iobuf_set_options", 00:14:14.141 "params": { 00:14:14.141 "small_pool_count": 8192, 00:14:14.141 "large_pool_count": 1024, 00:14:14.141 "small_bufsize": 8192, 00:14:14.141 "large_bufsize": 135168 00:14:14.141 } 00:14:14.141 } 00:14:14.141 ] 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "subsystem": "sock", 00:14:14.141 "config": [ 00:14:14.141 { 00:14:14.141 "method": "sock_set_default_impl", 00:14:14.141 "params": { 00:14:14.141 "impl_name": "uring" 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "sock_impl_set_options", 00:14:14.141 "params": { 00:14:14.141 "impl_name": "ssl", 00:14:14.141 "recv_buf_size": 4096, 00:14:14.141 "send_buf_size": 4096, 00:14:14.141 "enable_recv_pipe": true, 00:14:14.141 "enable_quickack": false, 00:14:14.141 "enable_placement_id": 0, 00:14:14.141 "enable_zerocopy_send_server": true, 00:14:14.141 "enable_zerocopy_send_client": false, 00:14:14.141 "zerocopy_threshold": 0, 00:14:14.141 "tls_version": 0, 00:14:14.141 "enable_ktls": false 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "sock_impl_set_options", 00:14:14.141 "params": { 00:14:14.141 "impl_name": "posix", 00:14:14.141 "recv_buf_size": 2097152, 00:14:14.141 "send_buf_size": 2097152, 00:14:14.141 "enable_recv_pipe": true, 00:14:14.141 "enable_quickack": false, 00:14:14.141 "enable_placement_id": 0, 00:14:14.141 "enable_zerocopy_send_server": true, 00:14:14.141 "enable_zerocopy_send_client": false, 00:14:14.141 "zerocopy_threshold": 0, 00:14:14.141 "tls_version": 0, 00:14:14.141 "enable_ktls": false 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "sock_impl_set_options", 00:14:14.141 "params": { 00:14:14.141 "impl_name": "uring", 00:14:14.141 "recv_buf_size": 2097152, 00:14:14.141 "send_buf_size": 2097152, 00:14:14.141 "enable_recv_pipe": true, 00:14:14.141 "enable_quickack": false, 00:14:14.141 "enable_placement_id": 0, 00:14:14.141 "enable_zerocopy_send_server": false, 00:14:14.141 "enable_zerocopy_send_client": false, 00:14:14.141 "zerocopy_threshold": 0, 00:14:14.141 "tls_version": 0, 00:14:14.141 "enable_ktls": false 00:14:14.141 } 00:14:14.141 } 00:14:14.141 ] 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "subsystem": "vmd", 00:14:14.141 "config": [] 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "subsystem": "accel", 00:14:14.141 "config": [ 00:14:14.141 { 00:14:14.141 "method": "accel_set_options", 00:14:14.141 "params": { 00:14:14.141 "small_cache_size": 128, 00:14:14.141 "large_cache_size": 16, 00:14:14.141 "task_count": 2048, 00:14:14.141 "sequence_count": 2048, 00:14:14.141 "buf_count": 2048 00:14:14.141 } 00:14:14.141 } 00:14:14.141 ] 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "subsystem": "bdev", 00:14:14.141 "config": [ 00:14:14.141 { 00:14:14.141 "method": "bdev_set_options", 00:14:14.141 "params": { 00:14:14.141 "bdev_io_pool_size": 65535, 00:14:14.141 "bdev_io_cache_size": 256, 00:14:14.141 "bdev_auto_examine": true, 00:14:14.141 "iobuf_small_cache_size": 128, 00:14:14.141 "iobuf_large_cache_size": 16 
00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "bdev_raid_set_options", 00:14:14.141 "params": { 00:14:14.141 "process_window_size_kb": 1024, 00:14:14.141 "process_max_bandwidth_mb_sec": 0 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "bdev_iscsi_set_options", 00:14:14.141 "params": { 00:14:14.141 "timeout_sec": 30 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "bdev_nvme_set_options", 00:14:14.141 "params": { 00:14:14.141 "action_on_timeout": "none", 00:14:14.141 "timeout_us": 0, 00:14:14.141 "timeout_admin_us": 0, 00:14:14.141 "keep_alive_timeout_ms": 10000, 00:14:14.141 "arbitration_burst": 0, 00:14:14.141 "low_priority_weight": 0, 00:14:14.141 "medium_priority_weight": 0, 00:14:14.141 "high_priority_weight": 0, 00:14:14.141 "nvme_adminq_poll_period_us": 10000, 00:14:14.141 "nvme_ioq_poll_period_us": 0, 00:14:14.141 "io_queue_requests": 512, 00:14:14.141 "delay_cmd_submit": true, 00:14:14.141 "transport_retry_count": 4, 00:14:14.141 "bdev_retry_count": 3, 00:14:14.141 "transport_ack_timeout": 0, 00:14:14.141 "ctrlr_loss_timeout_sec": 0, 00:14:14.141 "reconnect_delay_sec": 0, 00:14:14.141 "fast_io_fail_timeout_sec": 0, 00:14:14.141 "disable_auto_failback": false, 00:14:14.141 "generate_uuids": false, 00:14:14.141 "transport_tos": 0, 00:14:14.141 "nvme_error_stat": false, 00:14:14.141 "rdma_srq_size": 0, 00:14:14.141 "io_path_stat": false, 00:14:14.141 "allow_accel_sequence": false, 00:14:14.141 "rdma_max_cq_size": 0, 00:14:14.141 "rdma_cm_event_timeout_ms": 0, 00:14:14.141 "dhchap_digests": [ 00:14:14.141 "sha256", 00:14:14.141 "sha384", 00:14:14.141 "sha512" 00:14:14.141 ], 00:14:14.141 "dhchap_dhgroups": [ 00:14:14.141 "null", 00:14:14.141 "ffdhe2048", 00:14:14.141 "ffdhe3072", 00:14:14.141 "ffdhe4096", 00:14:14.141 "ffdhe6144", 00:14:14.141 "ffdhe8192" 00:14:14.141 ] 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "bdev_nvme_attach_controller", 00:14:14.141 "params": { 00:14:14.141 "name": "nvme0", 00:14:14.141 "trtype": "TCP", 00:14:14.141 "adrfam": "IPv4", 00:14:14.141 "traddr": "10.0.0.3", 00:14:14.141 "trsvcid": "4420", 00:14:14.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.141 "prchk_reftag": false, 00:14:14.141 "prchk_guard": false, 00:14:14.141 "ctrlr_loss_timeout_sec": 0, 00:14:14.141 "reconnect_delay_sec": 0, 00:14:14.141 "fast_io_fail_timeout_sec": 0, 00:14:14.141 "psk": "key0", 00:14:14.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:14.141 "hdgst": false, 00:14:14.141 "ddgst": false, 00:14:14.141 "multipath": "multipath" 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "bdev_nvme_set_hotplug", 00:14:14.141 "params": { 00:14:14.141 "period_us": 100000, 00:14:14.141 "enable": false 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "bdev_enable_histogram", 00:14:14.141 "params": { 00:14:14.141 "name": "nvme0n1", 00:14:14.141 "enable": true 00:14:14.141 } 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "method": "bdev_wait_for_examine" 00:14:14.141 } 00:14:14.141 ] 00:14:14.141 }, 00:14:14.141 { 00:14:14.141 "subsystem": "nbd", 00:14:14.141 "config": [] 00:14:14.141 } 00:14:14.141 ] 00:14:14.141 }' 00:14:14.141 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.141 09:28:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.141 [2024-10-16 09:28:38.492710] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:14:14.141 [2024-10-16 09:28:38.492988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72288 ] 00:14:14.401 [2024-10-16 09:28:38.626950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.401 [2024-10-16 09:28:38.677003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.660 [2024-10-16 09:28:38.811901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.660 [2024-10-16 09:28:38.857402] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.228 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.228 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:15.228 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:15.228 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:15.487 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.487 09:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.487 Running I/O for 1 seconds... 00:14:16.865 4608.00 IOPS, 18.00 MiB/s 00:14:16.865 Latency(us) 00:14:16.865 [2024-10-16T09:28:41.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.865 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.865 Verification LBA range: start 0x0 length 0x2000 00:14:16.865 nvme0n1 : 1.02 4647.08 18.15 0.00 0.00 27269.47 10307.03 20971.52 00:14:16.865 [2024-10-16T09:28:41.269Z] =================================================================================================================== 00:14:16.865 [2024-10-16T09:28:41.269Z] Total : 4647.08 18.15 0.00 0.00 27269.47 10307.03 20971.52 00:14:16.865 { 00:14:16.865 "results": [ 00:14:16.865 { 00:14:16.865 "job": "nvme0n1", 00:14:16.865 "core_mask": "0x2", 00:14:16.865 "workload": "verify", 00:14:16.865 "status": "finished", 00:14:16.865 "verify_range": { 00:14:16.865 "start": 0, 00:14:16.865 "length": 8192 00:14:16.865 }, 00:14:16.865 "queue_depth": 128, 00:14:16.865 "io_size": 4096, 00:14:16.865 "runtime": 1.019135, 00:14:16.865 "iops": 4647.078159419508, 00:14:16.865 "mibps": 18.152649060232452, 00:14:16.865 "io_failed": 0, 00:14:16.865 "io_timeout": 0, 00:14:16.865 "avg_latency_us": 27269.472235872236, 00:14:16.865 "min_latency_us": 10307.025454545455, 00:14:16.865 "max_latency_us": 20971.52 00:14:16.865 } 00:14:16.865 ], 00:14:16.865 "core_count": 1 00:14:16.865 } 00:14:16.865 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:16.865 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:16.865 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:16.865 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:16.865 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 
00:14:16.865 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:16.866 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:16.866 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:16.866 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:16.866 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:16.866 09:28:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:16.866 nvmf_trace.0 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72288 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72288 ']' 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72288 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72288 00:14:16.866 killing process with pid 72288 00:14:16.866 Received shutdown signal, test time was about 1.000000 seconds 00:14:16.866 00:14:16.866 Latency(us) 00:14:16.866 [2024-10-16T09:28:41.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.866 [2024-10-16T09:28:41.270Z] =================================================================================================================== 00:14:16.866 [2024-10-16T09:28:41.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72288' 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72288 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72288 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.866 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.866 rmmod nvme_tcp 00:14:17.125 rmmod nvme_fabrics 00:14:17.125 rmmod nvme_keyring 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 72258 ']' 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 72258 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72258 ']' 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72258 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72258 00:14:17.125 killing process with pid 72258 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72258' 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72258 00:14:17.125 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72258 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:17.384 09:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.384 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.YnIbv7tMwP /tmp/tmp.mlqQ0o1yUY /tmp/tmp.35hJ8IbUnf 00:14:17.385 00:14:17.385 real 1m19.907s 00:14:17.385 user 2m8.467s 00:14:17.385 sys 0m26.580s 00:14:17.385 ************************************ 00:14:17.385 END TEST nvmf_tls 00:14:17.385 ************************************ 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.385 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.644 ************************************ 00:14:17.644 START TEST nvmf_fips 00:14:17.644 ************************************ 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:17.644 * Looking for test storage... 
00:14:17.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:14:17.644 09:28:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.644 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.645 --rc genhtml_branch_coverage=1 00:14:17.645 --rc genhtml_function_coverage=1 00:14:17.645 --rc genhtml_legend=1 00:14:17.645 --rc geninfo_all_blocks=1 00:14:17.645 --rc geninfo_unexecuted_blocks=1 00:14:17.645 00:14:17.645 ' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.645 --rc genhtml_branch_coverage=1 00:14:17.645 --rc genhtml_function_coverage=1 00:14:17.645 --rc genhtml_legend=1 00:14:17.645 --rc geninfo_all_blocks=1 00:14:17.645 --rc geninfo_unexecuted_blocks=1 00:14:17.645 00:14:17.645 ' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.645 --rc genhtml_branch_coverage=1 00:14:17.645 --rc genhtml_function_coverage=1 00:14:17.645 --rc genhtml_legend=1 00:14:17.645 --rc geninfo_all_blocks=1 00:14:17.645 --rc geninfo_unexecuted_blocks=1 00:14:17.645 00:14:17.645 ' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.645 --rc genhtml_branch_coverage=1 00:14:17.645 --rc genhtml_function_coverage=1 00:14:17.645 --rc genhtml_legend=1 00:14:17.645 --rc geninfo_all_blocks=1 00:14:17.645 --rc geninfo_unexecuted_blocks=1 00:14:17.645 00:14:17.645 ' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.645 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.645 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:17.905 Error setting digest 00:14:17.905 40A2C028D27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:17.905 40A2C028D27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:17.905 
09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:17.905 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:17.906 Cannot find device "nvmf_init_br" 00:14:17.906 09:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:17.906 Cannot find device "nvmf_init_br2" 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:17.906 Cannot find device "nvmf_tgt_br" 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.906 Cannot find device "nvmf_tgt_br2" 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:17.906 Cannot find device "nvmf_init_br" 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:17.906 Cannot find device "nvmf_init_br2" 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:17.906 Cannot find device "nvmf_tgt_br" 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:17.906 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:18.165 Cannot find device "nvmf_tgt_br2" 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:18.165 Cannot find device "nvmf_br" 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:18.165 Cannot find device "nvmf_init_if" 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:18.165 Cannot find device "nvmf_init_if2" 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.165 09:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:18.165 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:18.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:14:18.428 00:14:18.428 --- 10.0.0.3 ping statistics --- 00:14:18.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.428 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:18.428 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:18.428 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:14:18.428 00:14:18.428 --- 10.0.0.4 ping statistics --- 00:14:18.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.428 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:14:18.428 00:14:18.428 --- 10.0.0.1 ping statistics --- 00:14:18.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.428 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:18.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:18.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:18.428 00:14:18.428 --- 10.0.0.2 ping statistics --- 00:14:18.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.428 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # return 0 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:18.428 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=72611 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 72611 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 72611 ']' 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.429 09:28:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.429 [2024-10-16 09:28:42.765401] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
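The "Cannot find device" and "Cannot open network namespace" messages above are nvmf_veth_init clearing out leftovers before it rebuilds the test topology; the rebuild is the run of ip/iptables commands that follows them. Condensed into a sketch (interface, namespace and address names are the ones in the trace; the second init/tgt interface pair and the bridge FORWARD rule are handled the same way and are omitted here for brevity):

    # Condensed from the trace: initiator stays in the default netns, the target side
    # lives in nvmf_tgt_ns_spdk, and both are joined through the nvmf_br bridge so
    # 10.0.0.1 can reach the listener on 10.0.0.3:4420 over TCP.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # sanity check across the bridge, as in the trace

The four pings logged above confirm both directions across the bridge before nvmf_tgt is started inside the namespace.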
00:14:18.429 [2024-10-16 09:28:42.765677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.697 [2024-10-16 09:28:42.906457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.697 [2024-10-16 09:28:42.957790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.697 [2024-10-16 09:28:42.957856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.697 [2024-10-16 09:28:42.957871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.697 [2024-10-16 09:28:42.957883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.697 [2024-10-16 09:28:42.957892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.697 [2024-10-16 09:28:42.958328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.697 [2024-10-16 09:28:43.014707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.697 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.697 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:18.697 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:18.697 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:18.697 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.N4I 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.N4I 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.N4I 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.N4I 00:14:18.956 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:19.215 [2024-10-16 09:28:43.411150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.215 [2024-10-16 09:28:43.427108] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.215 [2024-10-16 09:28:43.427280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.215 malloc0 00:14:19.215 09:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72639 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72639 /var/tmp/bdevperf.sock 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 72639 ']' 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.215 09:28:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:19.215 [2024-10-16 09:28:43.570620] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:19.215 [2024-10-16 09:28:43.570919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72639 ] 00:14:19.474 [2024-10-16 09:28:43.710681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.474 [2024-10-16 09:28:43.763267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.474 [2024-10-16 09:28:43.820130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.411 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.411 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:20.411 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N4I 00:14:20.411 09:28:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.670 [2024-10-16 09:28:44.932296] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.670 TLSTESTn1 00:14:20.670 09:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.929 Running I/O for 10 seconds... 
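The ten seconds of samples that follow come from bdevperf exercising the TLS connection that was just configured. Reduced to the RPC calls visible in the trace (paths, NQNs and key names as logged; this is a condensed replay of those steps, not the full fips.sh flow):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    psk=/tmp/spdk-psk.N4I   # 0600 file holding the NVMeTLSkey-1:01:... PSK written above

    # register the PSK with the bdevperf instance on its own RPC socket, attach to the
    # TLS-enabled listener at 10.0.0.3:4420, then drive the verify workload
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$psk"
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

perform_tests emits the per-second IOPS lines below and a JSON summary once the 10-second verify run on TLSTESTn1 completes.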
00:14:22.802 4668.00 IOPS, 18.23 MiB/s [2024-10-16T09:28:48.580Z] 4735.00 IOPS, 18.50 MiB/s [2024-10-16T09:28:49.515Z] 4775.67 IOPS, 18.65 MiB/s [2024-10-16T09:28:50.452Z] 4806.00 IOPS, 18.77 MiB/s [2024-10-16T09:28:51.387Z] 4821.00 IOPS, 18.83 MiB/s [2024-10-16T09:28:52.334Z] 4835.17 IOPS, 18.89 MiB/s [2024-10-16T09:28:53.269Z] 4841.00 IOPS, 18.91 MiB/s [2024-10-16T09:28:54.204Z] 4841.38 IOPS, 18.91 MiB/s [2024-10-16T09:28:55.582Z] 4843.11 IOPS, 18.92 MiB/s [2024-10-16T09:28:55.582Z] 4841.50 IOPS, 18.91 MiB/s 00:14:31.178 Latency(us) 00:14:31.178 [2024-10-16T09:28:55.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:31.178 Verification LBA range: start 0x0 length 0x2000 00:14:31.178 TLSTESTn1 : 10.01 4847.67 18.94 0.00 0.00 26360.10 4438.57 22043.93 00:14:31.178 [2024-10-16T09:28:55.582Z] =================================================================================================================== 00:14:31.178 [2024-10-16T09:28:55.582Z] Total : 4847.67 18.94 0.00 0.00 26360.10 4438.57 22043.93 00:14:31.178 { 00:14:31.178 "results": [ 00:14:31.178 { 00:14:31.178 "job": "TLSTESTn1", 00:14:31.178 "core_mask": "0x4", 00:14:31.178 "workload": "verify", 00:14:31.178 "status": "finished", 00:14:31.178 "verify_range": { 00:14:31.178 "start": 0, 00:14:31.178 "length": 8192 00:14:31.178 }, 00:14:31.178 "queue_depth": 128, 00:14:31.178 "io_size": 4096, 00:14:31.178 "runtime": 10.013461, 00:14:31.178 "iops": 4847.674545294579, 00:14:31.178 "mibps": 18.93622869255695, 00:14:31.178 "io_failed": 0, 00:14:31.178 "io_timeout": 0, 00:14:31.178 "avg_latency_us": 26360.095116281682, 00:14:31.178 "min_latency_us": 4438.574545454546, 00:14:31.178 "max_latency_us": 22043.927272727273 00:14:31.178 } 00:14:31.178 ], 00:14:31.178 "core_count": 1 00:14:31.178 } 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:31.179 nvmf_trace.0 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72639 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 72639 ']' 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
72639 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72639 00:14:31.179 killing process with pid 72639 00:14:31.179 Received shutdown signal, test time was about 10.000000 seconds 00:14:31.179 00:14:31.179 Latency(us) 00:14:31.179 [2024-10-16T09:28:55.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.179 [2024-10-16T09:28:55.583Z] =================================================================================================================== 00:14:31.179 [2024-10-16T09:28:55.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72639' 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 72639 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 72639 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.179 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.179 rmmod nvme_tcp 00:14:31.179 rmmod nvme_fabrics 00:14:31.179 rmmod nvme_keyring 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 72611 ']' 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 72611 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 72611 ']' 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 72611 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72611 00:14:31.438 killing process with pid 72611 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72611' 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 72611 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 72611 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:31.438 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.697 09:28:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:31.697 09:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.N4I 00:14:31.697 00:14:31.697 real 0m14.217s 00:14:31.697 user 0m19.830s 00:14:31.697 sys 0m5.664s 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.697 ************************************ 00:14:31.697 END TEST nvmf_fips 00:14:31.697 ************************************ 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.697 09:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.957 ************************************ 00:14:31.957 START TEST nvmf_control_msg_list 00:14:31.957 ************************************ 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:31.957 * Looking for test storage... 00:14:31.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.957 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:31.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.957 --rc genhtml_branch_coverage=1 00:14:31.957 --rc genhtml_function_coverage=1 00:14:31.957 --rc genhtml_legend=1 00:14:31.957 --rc geninfo_all_blocks=1 00:14:31.957 --rc geninfo_unexecuted_blocks=1 00:14:31.958 00:14:31.958 ' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:31.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.958 --rc genhtml_branch_coverage=1 00:14:31.958 --rc genhtml_function_coverage=1 00:14:31.958 --rc genhtml_legend=1 00:14:31.958 --rc geninfo_all_blocks=1 00:14:31.958 --rc geninfo_unexecuted_blocks=1 00:14:31.958 00:14:31.958 ' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:31.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.958 --rc genhtml_branch_coverage=1 00:14:31.958 --rc genhtml_function_coverage=1 00:14:31.958 --rc genhtml_legend=1 00:14:31.958 --rc geninfo_all_blocks=1 00:14:31.958 --rc geninfo_unexecuted_blocks=1 00:14:31.958 00:14:31.958 ' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:31.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.958 --rc genhtml_branch_coverage=1 00:14:31.958 --rc genhtml_function_coverage=1 00:14:31.958 --rc genhtml_legend=1 00:14:31.958 --rc geninfo_all_blocks=1 00:14:31.958 --rc 
geninfo_unexecuted_blocks=1 00:14:31.958 00:14:31.958 ' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.958 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.958 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:31.959 Cannot find device "nvmf_init_br" 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:31.959 Cannot find device "nvmf_init_br2" 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:31.959 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:32.218 Cannot find device "nvmf_tgt_br" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:32.218 Cannot find device "nvmf_tgt_br2" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:32.218 Cannot find device "nvmf_init_br" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:32.218 Cannot find device "nvmf_init_br2" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:32.218 Cannot find device "nvmf_tgt_br" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:32.218 Cannot find device "nvmf_tgt_br2" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:32.218 Cannot find device "nvmf_br" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:32.218 Cannot find 
device "nvmf_init_if" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:32.218 Cannot find device "nvmf_init_if2" 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:32.218 09:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:32.218 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:32.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:32.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:32.478 00:14:32.478 --- 10.0.0.3 ping statistics --- 00:14:32.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.478 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:32.478 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:32.478 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:32.478 00:14:32.478 --- 10.0.0.4 ping statistics --- 00:14:32.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.478 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:32.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:32.478 00:14:32.478 --- 10.0.0.1 ping statistics --- 00:14:32.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.478 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:32.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:32.478 00:14:32.478 --- 10.0.0.2 ping statistics --- 00:14:32.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.478 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # return 0 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=73029 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 73029 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73029 ']' 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
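Note on the setup traced above: nvmf_veth_init builds a small virtual test network in which the target runs inside its own namespace and the initiator side stays in the host namespace, joined by a bridge. Condensed from the commands in the trace (a sketch of what the helper does, not a separate script), the topology is roughly:

ip netns add nvmf_tgt_ns_spdk
# one veth pair per interface; the *_br ends get enslaved to a bridge below
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3/24
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
# open TCP/4420 towards the initiator interfaces and allow bridge-local forwarding;
# the real run tags each rule with an SPDK_NVMF comment so teardown can strip them later
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) just confirm reachability in both directions before nvmf_tgt is launched inside the namespace.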
00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.478 09:28:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.478 [2024-10-16 09:28:56.771381] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:32.478 [2024-10-16 09:28:56.771471] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.737 [2024-10-16 09:28:56.912855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.737 [2024-10-16 09:28:56.967520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.737 [2024-10-16 09:28:56.967585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.737 [2024-10-16 09:28:56.967600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.737 [2024-10-16 09:28:56.967611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.737 [2024-10-16 09:28:56.967620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.737 [2024-10-16 09:28:56.968070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.737 [2024-10-16 09:28:57.024403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.737 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.737 [2024-10-16 09:28:57.140996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.995 Malloc0 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:32.995 [2024-10-16 09:28:57.180833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73059 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73060 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73061 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:32.995 09:28:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73059 00:14:32.995 [2024-10-16 09:28:57.359020] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:32.995 [2024-10-16 09:28:57.369556] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:32.995 [2024-10-16 09:28:57.369938] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:34.372 Initializing NVMe Controllers 00:14:34.372 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:34.372 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:34.372 Initialization complete. Launching workers. 00:14:34.372 ======================================================== 00:14:34.372 Latency(us) 00:14:34.372 Device Information : IOPS MiB/s Average min max 00:14:34.372 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3718.98 14.53 268.52 127.76 812.19 00:14:34.372 ======================================================== 00:14:34.372 Total : 3718.98 14.53 268.52 127.76 812.19 00:14:34.372 00:14:34.372 Initializing NVMe Controllers 00:14:34.372 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:34.372 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:34.372 Initialization complete. Launching workers. 00:14:34.372 ======================================================== 00:14:34.372 Latency(us) 00:14:34.372 Device Information : IOPS MiB/s Average min max 00:14:34.372 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3718.00 14.52 268.58 141.55 605.95 00:14:34.372 ======================================================== 00:14:34.372 Total : 3718.00 14.52 268.58 141.55 605.95 00:14:34.372 00:14:34.372 Initializing NVMe Controllers 00:14:34.372 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:34.372 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:34.372 Initialization complete. Launching workers. 
00:14:34.372 ======================================================== 00:14:34.372 Latency(us) 00:14:34.372 Device Information : IOPS MiB/s Average min max 00:14:34.372 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3725.00 14.55 268.03 166.28 462.62 00:14:34.372 ======================================================== 00:14:34.372 Total : 3725.00 14.55 268.03 166.28 462.62 00:14:34.372 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73060 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73061 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.372 rmmod nvme_tcp 00:14:34.372 rmmod nvme_fabrics 00:14:34.372 rmmod nvme_keyring 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 73029 ']' 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 73029 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73029 ']' 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73029 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:14:34.372 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73029 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:34.373 killing process with pid 73029 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73029' 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73029 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 73029 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:34.373 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:34.632 00:14:34.632 real 0m2.864s 00:14:34.632 user 0m4.756s 00:14:34.632 
sys 0m1.315s 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.632 09:28:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:34.632 ************************************ 00:14:34.632 END TEST nvmf_control_msg_list 00:14:34.632 ************************************ 00:14:34.632 09:28:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:34.632 09:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:34.632 09:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.632 09:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.632 ************************************ 00:14:34.633 START TEST nvmf_wait_for_buf 00:14:34.633 ************************************ 00:14:34.633 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:34.892 * Looking for test storage... 00:14:34.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:34.892 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:34.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.893 --rc genhtml_branch_coverage=1 00:14:34.893 --rc genhtml_function_coverage=1 00:14:34.893 --rc genhtml_legend=1 00:14:34.893 --rc geninfo_all_blocks=1 00:14:34.893 --rc geninfo_unexecuted_blocks=1 00:14:34.893 00:14:34.893 ' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:34.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.893 --rc genhtml_branch_coverage=1 00:14:34.893 --rc genhtml_function_coverage=1 00:14:34.893 --rc genhtml_legend=1 00:14:34.893 --rc geninfo_all_blocks=1 00:14:34.893 --rc geninfo_unexecuted_blocks=1 00:14:34.893 00:14:34.893 ' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:34.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.893 --rc genhtml_branch_coverage=1 00:14:34.893 --rc genhtml_function_coverage=1 00:14:34.893 --rc genhtml_legend=1 00:14:34.893 --rc geninfo_all_blocks=1 00:14:34.893 --rc geninfo_unexecuted_blocks=1 00:14:34.893 00:14:34.893 ' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:34.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.893 --rc genhtml_branch_coverage=1 00:14:34.893 --rc genhtml_function_coverage=1 00:14:34.893 --rc genhtml_legend=1 00:14:34.893 --rc geninfo_all_blocks=1 00:14:34.893 --rc geninfo_unexecuted_blocks=1 00:14:34.893 00:14:34.893 ' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.893 09:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.893 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 
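A note on the "line 33: [: : integer expression expected" message that shows up here again (it also appeared at the start of the control_msg_list run): it comes from the [ builtin inside build_nvmf_app_args, where the trace shows '[' '' -eq 1 ']' being evaluated, i.e. an empty value compared numerically. [ rejects the empty string, prints that diagnostic, and the test simply evaluates false, so the script carries on with its defaults; both runs in this log complete normally afterwards. The same class of message can be reproduced with a one-liner:

bash -c '[ "" -eq 1 ]'    # prints a similar "integer expression expected" error and exits non-zero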
00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.893 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:34.894 Cannot find device "nvmf_init_br" 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:34.894 Cannot find device "nvmf_init_br2" 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:34.894 Cannot find device "nvmf_tgt_br" 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.894 Cannot find device "nvmf_tgt_br2" 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:34.894 Cannot find device "nvmf_init_br" 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:34.894 Cannot find device "nvmf_init_br2" 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:34.894 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:35.153 Cannot find device "nvmf_tgt_br" 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:35.153 Cannot find device "nvmf_tgt_br2" 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:35.153 Cannot find device "nvmf_br" 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:35.153 Cannot find device "nvmf_init_if" 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:35.153 Cannot find device "nvmf_init_if2" 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.153 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.153 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.153 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:35.154 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:35.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:35.431 00:14:35.431 --- 10.0.0.3 ping statistics --- 00:14:35.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.431 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:35.431 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:35.431 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:14:35.431 00:14:35.431 --- 10.0.0.4 ping statistics --- 00:14:35.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.431 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:14:35.431 00:14:35.431 --- 10.0.0.1 ping statistics --- 00:14:35.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.431 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:14:35.431 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:35.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:35.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:14:35.431 00:14:35.431 --- 10.0.0.2 ping statistics --- 00:14:35.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.431 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # return 0 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=73295 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 73295 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 73295 ']' 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.432 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.432 [2024-10-16 09:28:59.686737] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:14:35.432 [2024-10-16 09:28:59.686825] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.701 [2024-10-16 09:28:59.828667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.701 [2024-10-16 09:28:59.880321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.701 [2024-10-16 09:28:59.880382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.701 [2024-10-16 09:28:59.880396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.701 [2024-10-16 09:28:59.880406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.701 [2024-10-16 09:28:59.880415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.701 [2024-10-16 09:28:59.880857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.701 09:28:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.701 09:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.701 [2024-10-16 09:29:00.040847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.701 Malloc0 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.701 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.961 [2024-10-16 09:29:00.108944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:35.961 [2024-10-16 09:29:00.133070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.961 09:29:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:35.961 [2024-10-16 09:29:00.306830] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:37.338 Initializing NVMe Controllers 00:14:37.338 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:37.338 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:37.338 Initialization complete. Launching workers. 00:14:37.338 ======================================================== 00:14:37.338 Latency(us) 00:14:37.338 Device Information : IOPS MiB/s Average min max 00:14:37.338 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 489.03 61.13 8179.70 6081.09 15065.99 00:14:37.338 ======================================================== 00:14:37.338 Total : 489.03 61.13 8179.70 6081.09 15065.99 00:14:37.338 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4674 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4674 -eq 0 ]] 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.338 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.338 rmmod nvme_tcp 00:14:37.338 rmmod nvme_fabrics 00:14:37.338 rmmod nvme_keyring 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 73295 ']' 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 73295 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 73295 ']' 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 73295 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73295 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:37.597 killing process with pid 73295 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73295' 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 73295 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 73295 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:14:37.597 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:14:37.598 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:37.598 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:37.598 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.856 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:37.857 00:14:37.857 real 0m3.225s 00:14:37.857 user 0m2.547s 00:14:37.857 sys 0m0.789s 00:14:37.857 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.857 ************************************ 00:14:37.857 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:37.857 END TEST nvmf_wait_for_buf 00:14:37.857 ************************************ 00:14:38.116 09:29:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:38.116 09:29:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:38.116 09:29:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:38.116 00:14:38.116 real 4m41.179s 00:14:38.116 user 9m51.407s 00:14:38.116 sys 1m3.182s 00:14:38.116 09:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.116 09:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:38.116 ************************************ 00:14:38.116 END TEST nvmf_target_extra 00:14:38.116 ************************************ 00:14:38.116 09:29:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:38.116 09:29:02 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:38.116 09:29:02 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.116 09:29:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.116 ************************************ 00:14:38.116 START TEST nvmf_host 00:14:38.116 ************************************ 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:38.116 * Looking for test storage... 
00:14:38.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.116 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.376 --rc genhtml_branch_coverage=1 00:14:38.376 --rc genhtml_function_coverage=1 00:14:38.376 --rc genhtml_legend=1 00:14:38.376 --rc geninfo_all_blocks=1 00:14:38.376 --rc geninfo_unexecuted_blocks=1 00:14:38.376 00:14:38.376 ' 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:38.376 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:38.376 --rc genhtml_branch_coverage=1 00:14:38.376 --rc genhtml_function_coverage=1 00:14:38.376 --rc genhtml_legend=1 00:14:38.376 --rc geninfo_all_blocks=1 00:14:38.376 --rc geninfo_unexecuted_blocks=1 00:14:38.376 00:14:38.376 ' 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.376 --rc genhtml_branch_coverage=1 00:14:38.376 --rc genhtml_function_coverage=1 00:14:38.376 --rc genhtml_legend=1 00:14:38.376 --rc geninfo_all_blocks=1 00:14:38.376 --rc geninfo_unexecuted_blocks=1 00:14:38.376 00:14:38.376 ' 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.376 --rc genhtml_branch_coverage=1 00:14:38.376 --rc genhtml_function_coverage=1 00:14:38.376 --rc genhtml_legend=1 00:14:38.376 --rc geninfo_all_blocks=1 00:14:38.376 --rc geninfo_unexecuted_blocks=1 00:14:38.376 00:14:38.376 ' 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.376 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.377 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:38.377 
09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:38.377 ************************************ 00:14:38.377 START TEST nvmf_identify 00:14:38.377 ************************************ 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:38.377 * Looking for test storage... 00:14:38.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:38.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.377 --rc genhtml_branch_coverage=1 00:14:38.377 --rc genhtml_function_coverage=1 00:14:38.377 --rc genhtml_legend=1 00:14:38.377 --rc geninfo_all_blocks=1 00:14:38.377 --rc geninfo_unexecuted_blocks=1 00:14:38.377 00:14:38.377 ' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:38.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.377 --rc genhtml_branch_coverage=1 00:14:38.377 --rc genhtml_function_coverage=1 00:14:38.377 --rc genhtml_legend=1 00:14:38.377 --rc geninfo_all_blocks=1 00:14:38.377 --rc geninfo_unexecuted_blocks=1 00:14:38.377 00:14:38.377 ' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:38.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.377 --rc genhtml_branch_coverage=1 00:14:38.377 --rc genhtml_function_coverage=1 00:14:38.377 --rc genhtml_legend=1 00:14:38.377 --rc geninfo_all_blocks=1 00:14:38.377 --rc geninfo_unexecuted_blocks=1 00:14:38.377 00:14:38.377 ' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:38.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.377 --rc genhtml_branch_coverage=1 00:14:38.377 --rc genhtml_function_coverage=1 00:14:38.377 --rc genhtml_legend=1 00:14:38.377 --rc geninfo_all_blocks=1 00:14:38.377 --rc geninfo_unexecuted_blocks=1 00:14:38.377 00:14:38.377 ' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.377 
09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:38.377 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:38.377 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.378 09:29:02 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.378 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:38.637 Cannot find device "nvmf_init_br" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:38.637 Cannot find device "nvmf_init_br2" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:38.637 Cannot find device "nvmf_tgt_br" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:38.637 Cannot find device "nvmf_tgt_br2" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:38.637 Cannot find device "nvmf_init_br" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:38.637 Cannot find device "nvmf_init_br2" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:38.637 Cannot find device "nvmf_tgt_br" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:38.637 Cannot find device "nvmf_tgt_br2" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:38.637 Cannot find device "nvmf_br" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:38.637 Cannot find device "nvmf_init_if" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:38.637 Cannot find device "nvmf_init_if2" 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.637 09:29:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.637 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.637 
09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:38.637 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:38.637 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:38.897 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:38.897 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:38.897 00:14:38.897 --- 10.0.0.3 ping statistics --- 00:14:38.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.897 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:38.897 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:38.897 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:14:38.897 00:14:38.897 --- 10.0.0.4 ping statistics --- 00:14:38.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.897 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:38.897 00:14:38.897 --- 10.0.0.1 ping statistics --- 00:14:38.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.897 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:38.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:38.897 00:14:38.897 --- 10.0.0.2 ping statistics --- 00:14:38.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.897 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.897 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # return 0 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73614 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73614 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 73614 ']' 00:14:38.898 
09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.898 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:38.898 [2024-10-16 09:29:03.270192] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:38.898 [2024-10-16 09:29:03.270284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.157 [2024-10-16 09:29:03.413201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.157 [2024-10-16 09:29:03.469171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.157 [2024-10-16 09:29:03.469493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.157 [2024-10-16 09:29:03.469689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.157 [2024-10-16 09:29:03.469845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.157 [2024-10-16 09:29:03.469888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
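The target itself is then launched inside that namespace (host/identify.sh@18 above: core mask 0xF, tracepoint group mask 0xFFFF, shared-memory id 0) and the test blocks until its RPC socket is up. A rough stand-in for that wait, assuming the default /var/tmp/spdk.sock socket named in the log, is a simple polling loop; the real waitforlisten helper is more careful and also verifies that the process is still alive.

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt

# same invocation as the log: run on cores 0-3 with all tracepoint groups enabled
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll until the RPC UNIX domain socket shows up (give up after ~10 s)
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done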
00:14:39.157 [2024-10-16 09:29:03.471239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.157 [2024-10-16 09:29:03.471376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.157 [2024-10-16 09:29:03.471464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.157 [2024-10-16 09:29:03.471465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.157 [2024-10-16 09:29:03.529569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.416 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 [2024-10-16 09:29:03.608139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 Malloc0 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 [2024-10-16 09:29:03.721821] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.417 [ 00:14:39.417 { 00:14:39.417 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:39.417 "subtype": "Discovery", 00:14:39.417 "listen_addresses": [ 00:14:39.417 { 00:14:39.417 "trtype": "TCP", 00:14:39.417 "adrfam": "IPv4", 00:14:39.417 "traddr": "10.0.0.3", 00:14:39.417 "trsvcid": "4420" 00:14:39.417 } 00:14:39.417 ], 00:14:39.417 "allow_any_host": true, 00:14:39.417 "hosts": [] 00:14:39.417 }, 00:14:39.417 { 00:14:39.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.417 "subtype": "NVMe", 00:14:39.417 "listen_addresses": [ 00:14:39.417 { 00:14:39.417 "trtype": "TCP", 00:14:39.417 "adrfam": "IPv4", 00:14:39.417 "traddr": "10.0.0.3", 00:14:39.417 "trsvcid": "4420" 00:14:39.417 } 00:14:39.417 ], 00:14:39.417 "allow_any_host": true, 00:14:39.417 "hosts": [], 00:14:39.417 "serial_number": "SPDK00000000000001", 00:14:39.417 "model_number": "SPDK bdev Controller", 00:14:39.417 "max_namespaces": 32, 00:14:39.417 "min_cntlid": 1, 00:14:39.417 "max_cntlid": 65519, 00:14:39.417 "namespaces": [ 00:14:39.417 { 00:14:39.417 "nsid": 1, 00:14:39.417 "bdev_name": "Malloc0", 00:14:39.417 "name": "Malloc0", 00:14:39.417 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:39.417 "eui64": "ABCDEF0123456789", 00:14:39.417 "uuid": "d0139ee6-84cd-4084-b535-a604791814b9" 00:14:39.417 } 00:14:39.417 ] 00:14:39.417 } 00:14:39.417 ] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.417 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:39.417 [2024-10-16 09:29:03.782255] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
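With the target listening, the subsystem is provisioned over that socket using the RPCs shown above. Assuming scripts/rpc.py from the same SPDK checkout (the rpc_cmd wrapper in the log issues the same RPC names and arguments), the sequence amounts to roughly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the options logged at host/identify.sh@24
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MB malloc bdev with 512-byte blocks to back namespace 1
$RPC bdev_malloc_create 64 512 -b Malloc0

# NVM subsystem (any host allowed, serial number as logged) plus its namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

# listeners on the namespaced address for both the subsystem and discovery
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# sanity check: dump the resulting configuration (the JSON printed above)
$RPC nvmf_get_subsystems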
00:14:39.417 [2024-10-16 09:29:03.782466] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73640 ] 00:14:39.679 [2024-10-16 09:29:03.919621] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:39.679 [2024-10-16 09:29:03.919698] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:39.679 [2024-10-16 09:29:03.919705] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:39.679 [2024-10-16 09:29:03.919715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:39.679 [2024-10-16 09:29:03.919723] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:39.679 [2024-10-16 09:29:03.919993] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:39.679 [2024-10-16 09:29:03.920058] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x173b750 0 00:14:39.679 [2024-10-16 09:29:03.933588] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:39.679 [2024-10-16 09:29:03.933629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:39.679 [2024-10-16 09:29:03.933651] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:39.679 [2024-10-16 09:29:03.933654] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:39.679 [2024-10-16 09:29:03.933689] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.933697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.933701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.679 [2024-10-16 09:29:03.933713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:39.679 [2024-10-16 09:29:03.933745] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.679 [2024-10-16 09:29:03.941608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.679 [2024-10-16 09:29:03.941643] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.679 [2024-10-16 09:29:03.941663] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.941668] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.679 [2024-10-16 09:29:03.941682] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:39.679 [2024-10-16 09:29:03.941691] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:39.679 [2024-10-16 09:29:03.941697] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:39.679 [2024-10-16 09:29:03.941715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.941720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.679 
[2024-10-16 09:29:03.941724] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.679 [2024-10-16 09:29:03.941734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.679 [2024-10-16 09:29:03.941761] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.679 [2024-10-16 09:29:03.941814] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.679 [2024-10-16 09:29:03.941822] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.679 [2024-10-16 09:29:03.941825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.941830] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.679 [2024-10-16 09:29:03.941835] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:39.679 [2024-10-16 09:29:03.941843] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:39.679 [2024-10-16 09:29:03.941851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.941855] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.941859] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.679 [2024-10-16 09:29:03.941866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.679 [2024-10-16 09:29:03.941902] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.679 [2024-10-16 09:29:03.941953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.679 [2024-10-16 09:29:03.941959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.679 [2024-10-16 09:29:03.941963] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.941967] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.679 [2024-10-16 09:29:03.941973] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:39.679 [2024-10-16 09:29:03.941981] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.679 [2024-10-16 09:29:03.942003] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.942007] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.942011] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.679 [2024-10-16 09:29:03.942018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.679 [2024-10-16 09:29:03.942036] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.679 [2024-10-16 09:29:03.942082] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.679 [2024-10-16 09:29:03.942089] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.679 [2024-10-16 09:29:03.942093] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.679 [2024-10-16 09:29:03.942096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.679 [2024-10-16 09:29:03.942102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.679 [2024-10-16 09:29:03.942112] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942116] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942120] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.942127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.680 [2024-10-16 09:29:03.942144] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.680 [2024-10-16 09:29:03.942185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.680 [2024-10-16 09:29:03.942191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.680 [2024-10-16 09:29:03.942195] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942199] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.680 [2024-10-16 09:29:03.942204] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:39.680 [2024-10-16 09:29:03.942209] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:39.680 [2024-10-16 09:29:03.942216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.680 [2024-10-16 09:29:03.942321] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:39.680 [2024-10-16 09:29:03.942326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.680 [2024-10-16 09:29:03.942335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942339] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942343] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.942350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.680 [2024-10-16 09:29:03.942369] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.680 [2024-10-16 09:29:03.942425] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.680 [2024-10-16 09:29:03.942432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.680 [2024-10-16 09:29:03.942435] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.680 
[2024-10-16 09:29:03.942440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.680 [2024-10-16 09:29:03.942445] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.680 [2024-10-16 09:29:03.942470] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942475] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942479] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.942486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.680 [2024-10-16 09:29:03.942504] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.680 [2024-10-16 09:29:03.942547] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.680 [2024-10-16 09:29:03.942554] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.680 [2024-10-16 09:29:03.942557] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.680 [2024-10-16 09:29:03.942566] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.680 [2024-10-16 09:29:03.942571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:39.680 [2024-10-16 09:29:03.942579] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:39.680 [2024-10-16 09:29:03.942607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:39.680 [2024-10-16 09:29:03.942618] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942622] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.942630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.680 [2024-10-16 09:29:03.942651] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.680 [2024-10-16 09:29:03.942739] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.680 [2024-10-16 09:29:03.942746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.680 [2024-10-16 09:29:03.942750] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942754] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x173b750): datao=0, datal=4096, cccid=0 00:14:39.680 [2024-10-16 09:29:03.942759] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x179f840) on tqpair(0x173b750): expected_datao=0, payload_size=4096 00:14:39.680 [2024-10-16 09:29:03.942764] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 
[2024-10-16 09:29:03.942772] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942776] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942784] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.680 [2024-10-16 09:29:03.942790] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.680 [2024-10-16 09:29:03.942794] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942798] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.680 [2024-10-16 09:29:03.942806] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:39.680 [2024-10-16 09:29:03.942811] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:39.680 [2024-10-16 09:29:03.942815] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:39.680 [2024-10-16 09:29:03.942821] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:39.680 [2024-10-16 09:29:03.942826] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:39.680 [2024-10-16 09:29:03.942831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:39.680 [2024-10-16 09:29:03.942839] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.680 [2024-10-16 09:29:03.942847] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942851] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942855] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.942863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.680 [2024-10-16 09:29:03.942882] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.680 [2024-10-16 09:29:03.942935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.680 [2024-10-16 09:29:03.942942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.680 [2024-10-16 09:29:03.942945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942949] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.680 [2024-10-16 09:29:03.942957] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.942973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.680 [2024-10-16 09:29:03.942979] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.942988] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.942994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.680 [2024-10-16 09:29:03.943000] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.943004] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.943008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.943013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.680 [2024-10-16 09:29:03.943020] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.943023] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.943027] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.943033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.680 [2024-10-16 09:29:03.943038] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.680 [2024-10-16 09:29:03.943052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.680 [2024-10-16 09:29:03.943060] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.943064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x173b750) 00:14:39.680 [2024-10-16 09:29:03.943071] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.680 [2024-10-16 09:29:03.943091] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f840, cid 0, qid 0 00:14:39.680 [2024-10-16 09:29:03.943098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179f9c0, cid 1, qid 0 00:14:39.680 [2024-10-16 09:29:03.943103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fb40, cid 2, qid 0 00:14:39.680 [2024-10-16 09:29:03.943108] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.680 [2024-10-16 09:29:03.943113] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fe40, cid 4, qid 0 00:14:39.680 [2024-10-16 09:29:03.943199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.680 [2024-10-16 09:29:03.943217] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.680 [2024-10-16 09:29:03.943222] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.943226] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fe40) on tqpair=0x173b750 00:14:39.680 [2024-10-16 09:29:03.943232] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:39.680 [2024-10-16 09:29:03.943237] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:39.680 [2024-10-16 09:29:03.943248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.680 [2024-10-16 09:29:03.943252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x173b750) 00:14:39.681 [2024-10-16 09:29:03.943260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.681 [2024-10-16 09:29:03.943279] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fe40, cid 4, qid 0 00:14:39.681 [2024-10-16 09:29:03.943335] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.681 [2024-10-16 09:29:03.943342] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.681 [2024-10-16 09:29:03.943346] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943349] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x173b750): datao=0, datal=4096, cccid=4 00:14:39.681 [2024-10-16 09:29:03.943354] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x179fe40) on tqpair(0x173b750): expected_datao=0, payload_size=4096 00:14:39.681 [2024-10-16 09:29:03.943359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943366] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943370] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943378] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.681 [2024-10-16 09:29:03.943384] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.681 [2024-10-16 09:29:03.943387] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943391] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fe40) on tqpair=0x173b750 00:14:39.681 [2024-10-16 09:29:03.943404] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:39.681 [2024-10-16 09:29:03.943431] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943437] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x173b750) 00:14:39.681 [2024-10-16 09:29:03.943445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.681 [2024-10-16 09:29:03.943452] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943456] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943460] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x173b750) 00:14:39.681 [2024-10-16 09:29:03.943466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.681 [2024-10-16 09:29:03.943487] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x179fe40, cid 4, qid 0 00:14:39.681 [2024-10-16 09:29:03.943494] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179ffc0, cid 5, qid 0 00:14:39.681 [2024-10-16 09:29:03.943616] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.681 [2024-10-16 09:29:03.943625] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.681 [2024-10-16 09:29:03.943629] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943632] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x173b750): datao=0, datal=1024, cccid=4 00:14:39.681 [2024-10-16 09:29:03.943638] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x179fe40) on tqpair(0x173b750): expected_datao=0, payload_size=1024 00:14:39.681 [2024-10-16 09:29:03.943642] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943649] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943653] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943659] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.681 [2024-10-16 09:29:03.943665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.681 [2024-10-16 09:29:03.943668] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943672] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179ffc0) on tqpair=0x173b750 00:14:39.681 [2024-10-16 09:29:03.943691] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.681 [2024-10-16 09:29:03.943699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.681 [2024-10-16 09:29:03.943702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fe40) on tqpair=0x173b750 00:14:39.681 [2024-10-16 09:29:03.943717] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943722] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x173b750) 00:14:39.681 [2024-10-16 09:29:03.943729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.681 [2024-10-16 09:29:03.943755] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fe40, cid 4, qid 0 00:14:39.681 [2024-10-16 09:29:03.943826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.681 [2024-10-16 09:29:03.943833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.681 [2024-10-16 09:29:03.943837] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943841] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x173b750): datao=0, datal=3072, cccid=4 00:14:39.681 [2024-10-16 09:29:03.943845] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x179fe40) on tqpair(0x173b750): expected_datao=0, payload_size=3072 00:14:39.681 [2024-10-16 09:29:03.943850] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943857] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943861] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943869] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.681 [2024-10-16 09:29:03.943875] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.681 [2024-10-16 09:29:03.943879] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943883] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fe40) on tqpair=0x173b750 00:14:39.681 [2024-10-16 09:29:03.943892] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.943896] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x173b750) 00:14:39.681 [2024-10-16 09:29:03.943918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.681 [2024-10-16 09:29:03.943941] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fe40, cid 4, qid 0 00:14:39.681 [2024-10-16 09:29:03.944006] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.681 [2024-10-16 09:29:03.944012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.681 [2024-10-16 09:29:03.944016] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.681 [2024-10-16 09:29:03.944020] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x173b750): datao=0, datal=8, cccid=4 00:14:39.681 ===================================================== 00:14:39.681 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:39.681 ===================================================== 00:14:39.681 Controller Capabilities/Features 00:14:39.681 ================================ 00:14:39.681 Vendor ID: 0000 00:14:39.681 Subsystem Vendor ID: 0000 00:14:39.681 Serial Number: .................... 00:14:39.681 Model Number: ........................................ 
00:14:39.681 Firmware Version: 25.01 00:14:39.681 Recommended Arb Burst: 0 00:14:39.681 IEEE OUI Identifier: 00 00 00 00:14:39.681 Multi-path I/O 00:14:39.681 May have multiple subsystem ports: No 00:14:39.681 May have multiple controllers: No 00:14:39.681 Associated with SR-IOV VF: No 00:14:39.681 Max Data Transfer Size: 131072 00:14:39.681 Max Number of Namespaces: 0 00:14:39.681 Max Number of I/O Queues: 1024 00:14:39.681 NVMe Specification Version (VS): 1.3 00:14:39.681 NVMe Specification Version (Identify): 1.3 00:14:39.681 Maximum Queue Entries: 128 00:14:39.681 Contiguous Queues Required: Yes 00:14:39.681 Arbitration Mechanisms Supported 00:14:39.681 Weighted Round Robin: Not Supported 00:14:39.681 Vendor Specific: Not Supported 00:14:39.681 Reset Timeout: 15000 ms 00:14:39.681 Doorbell Stride: 4 bytes 00:14:39.681 NVM Subsystem Reset: Not Supported 00:14:39.681 Command Sets Supported 00:14:39.681 NVM Command Set: Supported 00:14:39.681 Boot Partition: Not Supported 00:14:39.681 Memory Page Size Minimum: 4096 bytes 00:14:39.681 Memory Page Size Maximum: 4096 bytes 00:14:39.681 Persistent Memory Region: Not Supported 00:14:39.681 Optional Asynchronous Events Supported 00:14:39.681 Namespace Attribute Notices: Not Supported 00:14:39.681 Firmware Activation Notices: Not Supported 00:14:39.681 ANA Change Notices: Not Supported 00:14:39.681 PLE Aggregate Log Change Notices: Not Supported 00:14:39.681 LBA Status Info Alert Notices: Not Supported 00:14:39.681 EGE Aggregate Log Change Notices: Not Supported 00:14:39.681 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.681 Zone Descriptor Change Notices: Not Supported 00:14:39.681 Discovery Log Change Notices: Supported 00:14:39.681 Controller Attributes 00:14:39.681 128-bit Host Identifier: Not Supported 00:14:39.681 Non-Operational Permissive Mode: Not Supported 00:14:39.681 NVM Sets: Not Supported 00:14:39.681 Read Recovery Levels: Not Supported 00:14:39.681 Endurance Groups: Not Supported 00:14:39.681 Predictable Latency Mode: Not Supported 00:14:39.681 Traffic Based Keep ALive: Not Supported 00:14:39.681 Namespace Granularity: Not Supported 00:14:39.681 SQ Associations: Not Supported 00:14:39.681 UUID List: Not Supported 00:14:39.681 Multi-Domain Subsystem: Not Supported 00:14:39.681 Fixed Capacity Management: Not Supported 00:14:39.681 Variable Capacity Management: Not Supported 00:14:39.681 Delete Endurance Group: Not Supported 00:14:39.681 Delete NVM Set: Not Supported 00:14:39.681 Extended LBA Formats Supported: Not Supported 00:14:39.681 Flexible Data Placement Supported: Not Supported 00:14:39.681 00:14:39.681 Controller Memory Buffer Support 00:14:39.681 ================================ 00:14:39.681 Supported: No 00:14:39.681 00:14:39.681 Persistent Memory Region Support 00:14:39.681 ================================ 00:14:39.681 Supported: No 00:14:39.681 00:14:39.681 Admin Command Set Attributes 00:14:39.681 ============================ 00:14:39.681 Security Send/Receive: Not Supported 00:14:39.681 Format NVM: Not Supported 00:14:39.681 Firmware Activate/Download: Not Supported 00:14:39.681 Namespace Management: Not Supported 00:14:39.681 Device Self-Test: Not Supported 00:14:39.681 Directives: Not Supported 00:14:39.681 NVMe-MI: Not Supported 00:14:39.681 Virtualization Management: Not Supported 00:14:39.682 Doorbell Buffer Config: Not Supported 00:14:39.682 Get LBA Status Capability: Not Supported 00:14:39.682 Command & Feature Lockdown Capability: Not Supported 00:14:39.682 Abort Command Limit: 1 00:14:39.682 Async 
Event Request Limit: 4 00:14:39.682 Number of Firmware Slots: N/A 00:14:39.682 Firmware Slot 1 Read-Only: N/A 00:14:39.682 [2024-10-16 09:29:03.944024] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x179fe40) on tqpair(0x173b750): expected_datao=0, payload_size=8 00:14:39.682 [2024-10-16 09:29:03.944029] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944036] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944039] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944054] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.682 [2024-10-16 09:29:03.944061] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.682 [2024-10-16 09:29:03.944064] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944068] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fe40) on tqpair=0x173b750 00:14:39.682 Firmware Activation Without Reset: N/A 00:14:39.682 Multiple Update Detection Support: N/A 00:14:39.682 Firmware Update Granularity: No Information Provided 00:14:39.682 Per-Namespace SMART Log: No 00:14:39.682 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.682 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:39.682 Command Effects Log Page: Not Supported 00:14:39.682 Get Log Page Extended Data: Supported 00:14:39.682 Telemetry Log Pages: Not Supported 00:14:39.682 Persistent Event Log Pages: Not Supported 00:14:39.682 Supported Log Pages Log Page: May Support 00:14:39.682 Commands Supported & Effects Log Page: Not Supported 00:14:39.682 Feature Identifiers & Effects Log Page:May Support 00:14:39.682 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.682 Data Area 4 for Telemetry Log: Not Supported 00:14:39.682 Error Log Page Entries Supported: 128 00:14:39.682 Keep Alive: Not Supported 00:14:39.682 00:14:39.682 NVM Command Set Attributes 00:14:39.682 ========================== 00:14:39.682 Submission Queue Entry Size 00:14:39.682 Max: 1 00:14:39.682 Min: 1 00:14:39.682 Completion Queue Entry Size 00:14:39.682 Max: 1 00:14:39.682 Min: 1 00:14:39.682 Number of Namespaces: 0 00:14:39.682 Compare Command: Not Supported 00:14:39.682 Write Uncorrectable Command: Not Supported 00:14:39.682 Dataset Management Command: Not Supported 00:14:39.682 Write Zeroes Command: Not Supported 00:14:39.682 Set Features Save Field: Not Supported 00:14:39.682 Reservations: Not Supported 00:14:39.682 Timestamp: Not Supported 00:14:39.682 Copy: Not Supported 00:14:39.682 Volatile Write Cache: Not Present 00:14:39.682 Atomic Write Unit (Normal): 1 00:14:39.682 Atomic Write Unit (PFail): 1 00:14:39.682 Atomic Compare & Write Unit: 1 00:14:39.682 Fused Compare & Write: Supported 00:14:39.682 Scatter-Gather List 00:14:39.682 SGL Command Set: Supported 00:14:39.682 SGL Keyed: Supported 00:14:39.682 SGL Bit Bucket Descriptor: Not Supported 00:14:39.682 SGL Metadata Pointer: Not Supported 00:14:39.682 Oversized SGL: Not Supported 00:14:39.682 SGL Metadata Address: Not Supported 00:14:39.682 SGL Offset: Supported 00:14:39.682 Transport SGL Data Block: Not Supported 00:14:39.682 Replay Protected Memory Block: Not Supported 00:14:39.682 00:14:39.682 Firmware Slot Information 00:14:39.682 ========================= 00:14:39.682 Active slot: 0 00:14:39.682 00:14:39.682 00:14:39.682 Error Log 00:14:39.682 ========= 00:14:39.682 00:14:39.682 Active
Namespaces 00:14:39.682 ================= 00:14:39.682 Discovery Log Page 00:14:39.682 ================== 00:14:39.682 Generation Counter: 2 00:14:39.682 Number of Records: 2 00:14:39.682 Record Format: 0 00:14:39.682 00:14:39.682 Discovery Log Entry 0 00:14:39.682 ---------------------- 00:14:39.682 Transport Type: 3 (TCP) 00:14:39.682 Address Family: 1 (IPv4) 00:14:39.682 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:39.682 Entry Flags: 00:14:39.682 Duplicate Returned Information: 1 00:14:39.682 Explicit Persistent Connection Support for Discovery: 1 00:14:39.682 Transport Requirements: 00:14:39.682 Secure Channel: Not Required 00:14:39.682 Port ID: 0 (0x0000) 00:14:39.682 Controller ID: 65535 (0xffff) 00:14:39.682 Admin Max SQ Size: 128 00:14:39.682 Transport Service Identifier: 4420 00:14:39.682 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:39.682 Transport Address: 10.0.0.3 00:14:39.682 Discovery Log Entry 1 00:14:39.682 ---------------------- 00:14:39.682 Transport Type: 3 (TCP) 00:14:39.682 Address Family: 1 (IPv4) 00:14:39.682 Subsystem Type: 2 (NVM Subsystem) 00:14:39.682 Entry Flags: 00:14:39.682 Duplicate Returned Information: 0 00:14:39.682 Explicit Persistent Connection Support for Discovery: 0 00:14:39.682 Transport Requirements: 00:14:39.682 Secure Channel: Not Required 00:14:39.682 Port ID: 0 (0x0000) 00:14:39.682 Controller ID: 65535 (0xffff) 00:14:39.682 Admin Max SQ Size: 128 00:14:39.682 Transport Service Identifier: 4420 00:14:39.682 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:39.682 Transport Address: 10.0.0.3 [2024-10-16 09:29:03.944175] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:39.682 [2024-10-16 09:29:03.944192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f840) on tqpair=0x173b750 00:14:39.682 [2024-10-16 09:29:03.944200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.682 [2024-10-16 09:29:03.944206] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179f9c0) on tqpair=0x173b750 00:14:39.682 [2024-10-16 09:29:03.944210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.682 [2024-10-16 09:29:03.944215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fb40) on tqpair=0x173b750 00:14:39.682 [2024-10-16 09:29:03.944220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.682 [2024-10-16 09:29:03.944225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.682 [2024-10-16 09:29:03.944230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.682 [2024-10-16 09:29:03.944239] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944248] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.682 [2024-10-16 09:29:03.944256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.682 [2024-10-16 
09:29:03.944281] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.682 [2024-10-16 09:29:03.944338] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.682 [2024-10-16 09:29:03.944345] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.682 [2024-10-16 09:29:03.944349] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944353] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.682 [2024-10-16 09:29:03.944361] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944365] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.682 [2024-10-16 09:29:03.944377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.682 [2024-10-16 09:29:03.944399] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.682 [2024-10-16 09:29:03.944458] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.682 [2024-10-16 09:29:03.944464] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.682 [2024-10-16 09:29:03.944468] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.682 [2024-10-16 09:29:03.944483] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:39.682 [2024-10-16 09:29:03.944488] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:39.682 [2024-10-16 09:29:03.944499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944504] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944507] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.682 [2024-10-16 09:29:03.944515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.682 [2024-10-16 09:29:03.944534] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.682 [2024-10-16 09:29:03.944630] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.682 [2024-10-16 09:29:03.944645] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.682 [2024-10-16 09:29:03.944649] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944654] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.682 [2024-10-16 09:29:03.944666] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944671] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.682 [2024-10-16 09:29:03.944675] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.682 [2024-10-16 09:29:03.944683] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.682 [2024-10-16 09:29:03.944704] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.682 [2024-10-16 09:29:03.944756] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.682 [2024-10-16 09:29:03.944763] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.944766] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.944771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.944781] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.944786] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.944790] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.944798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.944817] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.944866] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.944873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.944877] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.944881] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.944892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.944897] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.944901] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.944908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.944927] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.945001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.945008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.945012] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945016] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.945026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945030] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945034] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.945041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.945058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.945101] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.945108] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.945111] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945115] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.945125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945130] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945133] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.945141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.945158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.945248] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.945257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.945261] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945265] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.945276] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945285] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.945293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.945312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.945358] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.945365] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.945369] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945373] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.945384] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945389] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945393] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.945410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.945435] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.945485] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.945492] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.945496] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945500] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.945511] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945516] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.945520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.945528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.949567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.949597] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.949605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.949610] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.949614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.949644] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.949649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.949653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x173b750) 00:14:39.683 [2024-10-16 09:29:03.949662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.683 [2024-10-16 09:29:03.949687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x179fcc0, cid 3, qid 0 00:14:39.683 [2024-10-16 09:29:03.949736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.683 [2024-10-16 09:29:03.949743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.683 [2024-10-16 09:29:03.949747] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.683 [2024-10-16 09:29:03.949751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x179fcc0) on tqpair=0x173b750 00:14:39.683 [2024-10-16 09:29:03.949760] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:39.683 00:14:39.683 09:29:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:39.683 [2024-10-16 09:29:03.988397] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
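For reference, the admin-queue bring-up traced in the DEBUG lines that follow (icreq/icresp exchange, Fabrics CONNECT, CAP/VS/CC/CSTS property GET/SET, IDENTIFY, AER and keep-alive configuration) is what SPDK performs inside spdk_nvme_connect(); the spdk_nvme_identify invocation above is essentially a front end for that call plus the Identify data dump printed further down. A minimal host-side sketch against the same target, assuming only SPDK's public API and the connect string from the command line above (the application name "identify_sketch" and its single output line are illustrative, not part of this test run), might look like:

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (DPDK EAL), as in the
	 * "DPDK EAL parameters" line below. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string that the test passes via -r above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() drives the admin-queue state machine recorded in
	 * the trace: icreq/icresp, Fabrics CONNECT, property accesses, IDENTIFY,
	 * AER configuration and keep-alive setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The Identify Controller data backs the "Controller Capabilities/
	 * Features" section printed later in this log. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.40s\n", (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Built against the SPDK headers and libraries, a program along these lines run against the same 10.0.0.3:4420 listener would be expected to produce a FABRIC CONNECT / PROPERTY GET / IDENTIFY sequence similar to the one captured below.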
00:14:39.683 [2024-10-16 09:29:03.988446] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73643 ] 00:14:39.948 [2024-10-16 09:29:04.125959] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:39.948 [2024-10-16 09:29:04.126015] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:39.948 [2024-10-16 09:29:04.126022] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:39.948 [2024-10-16 09:29:04.126031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:39.948 [2024-10-16 09:29:04.126039] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:39.948 [2024-10-16 09:29:04.126282] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:39.948 [2024-10-16 09:29:04.126329] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc15750 0 00:14:39.948 [2024-10-16 09:29:04.130683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:39.948 [2024-10-16 09:29:04.130710] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:39.948 [2024-10-16 09:29:04.130716] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:39.948 [2024-10-16 09:29:04.130720] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:39.948 [2024-10-16 09:29:04.130755] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.948 [2024-10-16 09:29:04.130768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.948 [2024-10-16 09:29:04.130772] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.948 [2024-10-16 09:29:04.130784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:39.948 [2024-10-16 09:29:04.130814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.948 [2024-10-16 09:29:04.138650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.948 [2024-10-16 09:29:04.138675] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.948 [2024-10-16 09:29:04.138681] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.948 [2024-10-16 09:29:04.138686] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.948 [2024-10-16 09:29:04.138699] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:39.948 [2024-10-16 09:29:04.138708] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:39.948 [2024-10-16 09:29:04.138714] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:39.948 [2024-10-16 09:29:04.138731] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.948 [2024-10-16 09:29:04.138743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.948 [2024-10-16 09:29:04.138747] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.948 [2024-10-16 09:29:04.138757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.948 [2024-10-16 09:29:04.138784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.948 [2024-10-16 09:29:04.138839] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.948 [2024-10-16 09:29:04.138847] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.948 [2024-10-16 09:29:04.138851] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.948 [2024-10-16 09:29:04.138856] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.948 [2024-10-16 09:29:04.138861] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:39.948 [2024-10-16 09:29:04.138870] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:39.948 [2024-10-16 09:29:04.138878] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.948 [2024-10-16 09:29:04.138883] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.138887] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.138895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.949 [2024-10-16 09:29:04.138915] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.949 [2024-10-16 09:29:04.138960] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.949 [2024-10-16 09:29:04.138973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.949 [2024-10-16 09:29:04.138978] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.138982] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.949 [2024-10-16 09:29:04.138988] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:39.949 [2024-10-16 09:29:04.138998] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.949 [2024-10-16 09:29:04.139006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139011] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139015] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.139023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.949 [2024-10-16 09:29:04.139042] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.949 [2024-10-16 09:29:04.139118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.949 [2024-10-16 09:29:04.139125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.949 [2024-10-16 09:29:04.139129] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139134] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.949 [2024-10-16 09:29:04.139139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.949 [2024-10-16 09:29:04.139150] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139156] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139160] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.139168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.949 [2024-10-16 09:29:04.139186] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.949 [2024-10-16 09:29:04.139234] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.949 [2024-10-16 09:29:04.139247] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.949 [2024-10-16 09:29:04.139252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.949 [2024-10-16 09:29:04.139262] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:39.949 [2024-10-16 09:29:04.139268] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:39.949 [2024-10-16 09:29:04.139276] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.949 [2024-10-16 09:29:04.139383] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:39.949 [2024-10-16 09:29:04.139387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.949 [2024-10-16 09:29:04.139396] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139401] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139405] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.139413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.949 [2024-10-16 09:29:04.139433] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.949 [2024-10-16 09:29:04.139483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.949 [2024-10-16 09:29:04.139490] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.949 [2024-10-16 09:29:04.139495] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139499] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.949 [2024-10-16 09:29:04.139505] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.949 [2024-10-16 09:29:04.139515] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139521] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139525] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.139533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.949 [2024-10-16 09:29:04.139564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.949 [2024-10-16 09:29:04.139621] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.949 [2024-10-16 09:29:04.139633] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.949 [2024-10-16 09:29:04.139638] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139642] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.949 [2024-10-16 09:29:04.139648] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.949 [2024-10-16 09:29:04.139653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:39.949 [2024-10-16 09:29:04.139662] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:39.949 [2024-10-16 09:29:04.139678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:39.949 [2024-10-16 09:29:04.139688] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139692] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.139700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.949 [2024-10-16 09:29:04.139721] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.949 [2024-10-16 09:29:04.139813] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.949 [2024-10-16 09:29:04.139829] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.949 [2024-10-16 09:29:04.139839] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139843] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=4096, cccid=0 00:14:39.949 [2024-10-16 09:29:04.139849] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc79840) on tqpair(0xc15750): expected_datao=0, payload_size=4096 00:14:39.949 [2024-10-16 09:29:04.139854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139862] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139867] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 
09:29:04.139876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.949 [2024-10-16 09:29:04.139883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.949 [2024-10-16 09:29:04.139887] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139891] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.949 [2024-10-16 09:29:04.139900] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:39.949 [2024-10-16 09:29:04.139905] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:39.949 [2024-10-16 09:29:04.139910] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:39.949 [2024-10-16 09:29:04.139915] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:39.949 [2024-10-16 09:29:04.139920] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:39.949 [2024-10-16 09:29:04.139925] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:39.949 [2024-10-16 09:29:04.139935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.949 [2024-10-16 09:29:04.139943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139948] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.139952] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.139960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.949 [2024-10-16 09:29:04.139981] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.949 [2024-10-16 09:29:04.140031] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.949 [2024-10-16 09:29:04.140043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.949 [2024-10-16 09:29:04.140047] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.140052] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.949 [2024-10-16 09:29:04.140060] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.140064] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.949 [2024-10-16 09:29:04.140068] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc15750) 00:14:39.949 [2024-10-16 09:29:04.140076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.950 [2024-10-16 09:29:04.140082] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140087] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140091] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc15750) 00:14:39.950 
[2024-10-16 09:29:04.140097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.950 [2024-10-16 09:29:04.140104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140108] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc15750) 00:14:39.950 [2024-10-16 09:29:04.140118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.950 [2024-10-16 09:29:04.140125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140129] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140133] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.950 [2024-10-16 09:29:04.140139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.950 [2024-10-16 09:29:04.140145] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140173] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc15750) 00:14:39.950 [2024-10-16 09:29:04.140180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.950 [2024-10-16 09:29:04.140202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79840, cid 0, qid 0 00:14:39.950 [2024-10-16 09:29:04.140209] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc799c0, cid 1, qid 0 00:14:39.950 [2024-10-16 09:29:04.140214] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79b40, cid 2, qid 0 00:14:39.950 [2024-10-16 09:29:04.140220] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.950 [2024-10-16 09:29:04.140225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79e40, cid 4, qid 0 00:14:39.950 [2024-10-16 09:29:04.140312] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.950 [2024-10-16 09:29:04.140327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.950 [2024-10-16 09:29:04.140332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79e40) on tqpair=0xc15750 00:14:39.950 [2024-10-16 09:29:04.140343] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:39.950 [2024-10-16 09:29:04.140349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140358] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140383] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140387] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc15750) 00:14:39.950 [2024-10-16 09:29:04.140395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.950 [2024-10-16 09:29:04.140416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79e40, cid 4, qid 0 00:14:39.950 [2024-10-16 09:29:04.140462] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.950 [2024-10-16 09:29:04.140469] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.950 [2024-10-16 09:29:04.140473] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140478] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79e40) on tqpair=0xc15750 00:14:39.950 [2024-10-16 09:29:04.140562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140582] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140592] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140597] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc15750) 00:14:39.950 [2024-10-16 09:29:04.140605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.950 [2024-10-16 09:29:04.140627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79e40, cid 4, qid 0 00:14:39.950 [2024-10-16 09:29:04.140693] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.950 [2024-10-16 09:29:04.140712] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.950 [2024-10-16 09:29:04.140717] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=4096, cccid=4 00:14:39.950 [2024-10-16 09:29:04.140727] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc79e40) on tqpair(0xc15750): expected_datao=0, payload_size=4096 00:14:39.950 [2024-10-16 09:29:04.140732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140739] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140744] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.950 [2024-10-16 09:29:04.140760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:39.950 [2024-10-16 09:29:04.140764] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140768] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79e40) on tqpair=0xc15750 00:14:39.950 [2024-10-16 09:29:04.140779] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:39.950 [2024-10-16 09:29:04.140793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140805] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.140813] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140817] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc15750) 00:14:39.950 [2024-10-16 09:29:04.140825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.950 [2024-10-16 09:29:04.140847] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79e40, cid 4, qid 0 00:14:39.950 [2024-10-16 09:29:04.140917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.950 [2024-10-16 09:29:04.140925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.950 [2024-10-16 09:29:04.140929] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=4096, cccid=4 00:14:39.950 [2024-10-16 09:29:04.140938] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc79e40) on tqpair(0xc15750): expected_datao=0, payload_size=4096 00:14:39.950 [2024-10-16 09:29:04.140943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140951] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140955] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140964] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.950 [2024-10-16 09:29:04.140971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.950 [2024-10-16 09:29:04.140975] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.140979] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79e40) on tqpair=0xc15750 00:14:39.950 [2024-10-16 09:29:04.140995] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141007] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141015] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.141020] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc15750) 00:14:39.950 [2024-10-16 09:29:04.141027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.950 [2024-10-16 09:29:04.141048] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79e40, cid 4, qid 0 00:14:39.950 [2024-10-16 09:29:04.141104] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.950 [2024-10-16 09:29:04.141111] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.950 [2024-10-16 09:29:04.141115] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.141119] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=4096, cccid=4 00:14:39.950 [2024-10-16 09:29:04.141125] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc79e40) on tqpair(0xc15750): expected_datao=0, payload_size=4096 00:14:39.950 [2024-10-16 09:29:04.141130] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.141137] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.141141] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.141150] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.950 [2024-10-16 09:29:04.141157] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.950 [2024-10-16 09:29:04.141161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.950 [2024-10-16 09:29:04.141165] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79e40) on tqpair=0xc15750 00:14:39.950 [2024-10-16 09:29:04.141174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141196] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141216] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:39.950 [2024-10-16 09:29:04.141232] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:39.950 [2024-10-16 09:29:04.141238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:39.951 [2024-10-16 09:29:04.141244] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:39.951 [2024-10-16 09:29:04.141259] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141284] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141288] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.951 [2024-10-16 09:29:04.141316] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79e40, cid 4, qid 0 00:14:39.951 [2024-10-16 09:29:04.141324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79fc0, cid 5, qid 0 00:14:39.951 [2024-10-16 09:29:04.141391] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.141404] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.141409] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141413] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79e40) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.141421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.141427] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.141431] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141435] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79fc0) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.141447] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141452] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141479] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79fc0, cid 5, qid 0 00:14:39.951 [2024-10-16 09:29:04.141526] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.141548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.141555] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141559] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79fc0) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.141571] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141576] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141610] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79fc0, cid 5, qid 0 00:14:39.951 [2024-10-16 09:29:04.141657] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.141665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.141669] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79fc0) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.141684] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141689] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79fc0, cid 5, qid 0 00:14:39.951 [2024-10-16 09:29:04.141764] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.141771] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.141775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79fc0) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.141799] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141811] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141827] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141832] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141846] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141870] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.141876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc15750) 00:14:39.951 [2024-10-16 09:29:04.141882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.951 [2024-10-16 09:29:04.141904] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79fc0, cid 5, qid 0 00:14:39.951 [2024-10-16 09:29:04.141912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79e40, cid 4, qid 0 00:14:39.951 [2024-10-16 09:29:04.141917] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7a140, cid 6, qid 0 00:14:39.951 [2024-10-16 
09:29:04.141922] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7a2c0, cid 7, qid 0 00:14:39.951 [2024-10-16 09:29:04.142054] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.951 [2024-10-16 09:29:04.142070] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.951 [2024-10-16 09:29:04.142075] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142080] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=8192, cccid=5 00:14:39.951 [2024-10-16 09:29:04.142085] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc79fc0) on tqpair(0xc15750): expected_datao=0, payload_size=8192 00:14:39.951 [2024-10-16 09:29:04.142090] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142107] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142116] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.951 [2024-10-16 09:29:04.142129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.951 [2024-10-16 09:29:04.142133] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142137] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=512, cccid=4 00:14:39.951 [2024-10-16 09:29:04.142142] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc79e40) on tqpair(0xc15750): expected_datao=0, payload_size=512 00:14:39.951 [2024-10-16 09:29:04.142147] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142154] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142158] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.951 [2024-10-16 09:29:04.142184] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.951 [2024-10-16 09:29:04.142188] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142192] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=512, cccid=6 00:14:39.951 [2024-10-16 09:29:04.142197] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7a140) on tqpair(0xc15750): expected_datao=0, payload_size=512 00:14:39.951 [2024-10-16 09:29:04.142201] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142208] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142211] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:39.951 [2024-10-16 09:29:04.142223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:39.951 [2024-10-16 09:29:04.142227] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142231] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc15750): datao=0, datal=4096, cccid=7 00:14:39.951 [2024-10-16 09:29:04.142235] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc7a2c0) on tqpair(0xc15750): expected_datao=0, payload_size=4096 00:14:39.951 [2024-10-16 09:29:04.142240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142246] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142250] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142256] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.142262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.142266] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142270] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79fc0) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.142286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.142294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.142297] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142301] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79e40) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.142313] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.142319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.142323] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142327] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7a140) on tqpair=0xc15750 00:14:39.951 [2024-10-16 09:29:04.142335] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.951 [2024-10-16 09:29:04.142341] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.951 [2024-10-16 09:29:04.142344] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.951 [2024-10-16 09:29:04.142349] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7a2c0) on tqpair=0xc15750 00:14:39.951 ===================================================== 00:14:39.951 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.951 ===================================================== 00:14:39.951 Controller Capabilities/Features 00:14:39.951 ================================ 00:14:39.952 Vendor ID: 8086 00:14:39.952 Subsystem Vendor ID: 8086 00:14:39.952 Serial Number: SPDK00000000000001 00:14:39.952 Model Number: SPDK bdev Controller 00:14:39.952 Firmware Version: 25.01 00:14:39.952 Recommended Arb Burst: 6 00:14:39.952 IEEE OUI Identifier: e4 d2 5c 00:14:39.952 Multi-path I/O 00:14:39.952 May have multiple subsystem ports: Yes 00:14:39.952 May have multiple controllers: Yes 00:14:39.952 Associated with SR-IOV VF: No 00:14:39.952 Max Data Transfer Size: 131072 00:14:39.952 Max Number of Namespaces: 32 00:14:39.952 Max Number of I/O Queues: 127 00:14:39.952 NVMe Specification Version (VS): 1.3 00:14:39.952 NVMe Specification Version (Identify): 1.3 00:14:39.952 Maximum Queue Entries: 128 00:14:39.952 Contiguous Queues Required: Yes 00:14:39.952 Arbitration Mechanisms Supported 00:14:39.952 Weighted Round Robin: Not Supported 00:14:39.952 Vendor Specific: Not Supported 00:14:39.952 Reset Timeout: 15000 ms 00:14:39.952 
Doorbell Stride: 4 bytes 00:14:39.952 NVM Subsystem Reset: Not Supported 00:14:39.952 Command Sets Supported 00:14:39.952 NVM Command Set: Supported 00:14:39.952 Boot Partition: Not Supported 00:14:39.952 Memory Page Size Minimum: 4096 bytes 00:14:39.952 Memory Page Size Maximum: 4096 bytes 00:14:39.952 Persistent Memory Region: Not Supported 00:14:39.952 Optional Asynchronous Events Supported 00:14:39.952 Namespace Attribute Notices: Supported 00:14:39.952 Firmware Activation Notices: Not Supported 00:14:39.952 ANA Change Notices: Not Supported 00:14:39.952 PLE Aggregate Log Change Notices: Not Supported 00:14:39.952 LBA Status Info Alert Notices: Not Supported 00:14:39.952 EGE Aggregate Log Change Notices: Not Supported 00:14:39.952 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.952 Zone Descriptor Change Notices: Not Supported 00:14:39.952 Discovery Log Change Notices: Not Supported 00:14:39.952 Controller Attributes 00:14:39.952 128-bit Host Identifier: Supported 00:14:39.952 Non-Operational Permissive Mode: Not Supported 00:14:39.952 NVM Sets: Not Supported 00:14:39.952 Read Recovery Levels: Not Supported 00:14:39.952 Endurance Groups: Not Supported 00:14:39.952 Predictable Latency Mode: Not Supported 00:14:39.952 Traffic Based Keep ALive: Not Supported 00:14:39.952 Namespace Granularity: Not Supported 00:14:39.952 SQ Associations: Not Supported 00:14:39.952 UUID List: Not Supported 00:14:39.952 Multi-Domain Subsystem: Not Supported 00:14:39.952 Fixed Capacity Management: Not Supported 00:14:39.952 Variable Capacity Management: Not Supported 00:14:39.952 Delete Endurance Group: Not Supported 00:14:39.952 Delete NVM Set: Not Supported 00:14:39.952 Extended LBA Formats Supported: Not Supported 00:14:39.952 Flexible Data Placement Supported: Not Supported 00:14:39.952 00:14:39.952 Controller Memory Buffer Support 00:14:39.952 ================================ 00:14:39.952 Supported: No 00:14:39.952 00:14:39.952 Persistent Memory Region Support 00:14:39.952 ================================ 00:14:39.952 Supported: No 00:14:39.952 00:14:39.952 Admin Command Set Attributes 00:14:39.952 ============================ 00:14:39.952 Security Send/Receive: Not Supported 00:14:39.952 Format NVM: Not Supported 00:14:39.952 Firmware Activate/Download: Not Supported 00:14:39.952 Namespace Management: Not Supported 00:14:39.952 Device Self-Test: Not Supported 00:14:39.952 Directives: Not Supported 00:14:39.952 NVMe-MI: Not Supported 00:14:39.952 Virtualization Management: Not Supported 00:14:39.952 Doorbell Buffer Config: Not Supported 00:14:39.952 Get LBA Status Capability: Not Supported 00:14:39.952 Command & Feature Lockdown Capability: Not Supported 00:14:39.952 Abort Command Limit: 4 00:14:39.952 Async Event Request Limit: 4 00:14:39.952 Number of Firmware Slots: N/A 00:14:39.952 Firmware Slot 1 Read-Only: N/A 00:14:39.952 Firmware Activation Without Reset: N/A 00:14:39.952 Multiple Update Detection Support: N/A 00:14:39.952 Firmware Update Granularity: No Information Provided 00:14:39.952 Per-Namespace SMART Log: No 00:14:39.952 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.952 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:39.952 Command Effects Log Page: Supported 00:14:39.952 Get Log Page Extended Data: Supported 00:14:39.952 Telemetry Log Pages: Not Supported 00:14:39.952 Persistent Event Log Pages: Not Supported 00:14:39.952 Supported Log Pages Log Page: May Support 00:14:39.952 Commands Supported & Effects Log Page: Not Supported 00:14:39.952 Feature Identifiers & 
Effects Log Page:May Support 00:14:39.952 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.952 Data Area 4 for Telemetry Log: Not Supported 00:14:39.952 Error Log Page Entries Supported: 128 00:14:39.952 Keep Alive: Supported 00:14:39.952 Keep Alive Granularity: 10000 ms 00:14:39.952 00:14:39.952 NVM Command Set Attributes 00:14:39.952 ========================== 00:14:39.952 Submission Queue Entry Size 00:14:39.952 Max: 64 00:14:39.952 Min: 64 00:14:39.952 Completion Queue Entry Size 00:14:39.952 Max: 16 00:14:39.952 Min: 16 00:14:39.952 Number of Namespaces: 32 00:14:39.952 Compare Command: Supported 00:14:39.952 Write Uncorrectable Command: Not Supported 00:14:39.952 Dataset Management Command: Supported 00:14:39.952 Write Zeroes Command: Supported 00:14:39.952 Set Features Save Field: Not Supported 00:14:39.952 Reservations: Supported 00:14:39.952 Timestamp: Not Supported 00:14:39.952 Copy: Supported 00:14:39.952 Volatile Write Cache: Present 00:14:39.952 Atomic Write Unit (Normal): 1 00:14:39.952 Atomic Write Unit (PFail): 1 00:14:39.952 Atomic Compare & Write Unit: 1 00:14:39.952 Fused Compare & Write: Supported 00:14:39.952 Scatter-Gather List 00:14:39.952 SGL Command Set: Supported 00:14:39.952 SGL Keyed: Supported 00:14:39.952 SGL Bit Bucket Descriptor: Not Supported 00:14:39.952 SGL Metadata Pointer: Not Supported 00:14:39.952 Oversized SGL: Not Supported 00:14:39.952 SGL Metadata Address: Not Supported 00:14:39.952 SGL Offset: Supported 00:14:39.952 Transport SGL Data Block: Not Supported 00:14:39.952 Replay Protected Memory Block: Not Supported 00:14:39.952 00:14:39.952 Firmware Slot Information 00:14:39.952 ========================= 00:14:39.952 Active slot: 1 00:14:39.952 Slot 1 Firmware Revision: 25.01 00:14:39.952 00:14:39.952 00:14:39.952 Commands Supported and Effects 00:14:39.952 ============================== 00:14:39.952 Admin Commands 00:14:39.952 -------------- 00:14:39.952 Get Log Page (02h): Supported 00:14:39.952 Identify (06h): Supported 00:14:39.952 Abort (08h): Supported 00:14:39.952 Set Features (09h): Supported 00:14:39.952 Get Features (0Ah): Supported 00:14:39.952 Asynchronous Event Request (0Ch): Supported 00:14:39.952 Keep Alive (18h): Supported 00:14:39.952 I/O Commands 00:14:39.952 ------------ 00:14:39.952 Flush (00h): Supported LBA-Change 00:14:39.952 Write (01h): Supported LBA-Change 00:14:39.952 Read (02h): Supported 00:14:39.952 Compare (05h): Supported 00:14:39.952 Write Zeroes (08h): Supported LBA-Change 00:14:39.952 Dataset Management (09h): Supported LBA-Change 00:14:39.952 Copy (19h): Supported LBA-Change 00:14:39.952 00:14:39.952 Error Log 00:14:39.952 ========= 00:14:39.952 00:14:39.952 Arbitration 00:14:39.952 =========== 00:14:39.952 Arbitration Burst: 1 00:14:39.952 00:14:39.952 Power Management 00:14:39.952 ================ 00:14:39.952 Number of Power States: 1 00:14:39.952 Current Power State: Power State #0 00:14:39.952 Power State #0: 00:14:39.952 Max Power: 0.00 W 00:14:39.952 Non-Operational State: Operational 00:14:39.952 Entry Latency: Not Reported 00:14:39.952 Exit Latency: Not Reported 00:14:39.952 Relative Read Throughput: 0 00:14:39.952 Relative Read Latency: 0 00:14:39.952 Relative Write Throughput: 0 00:14:39.952 Relative Write Latency: 0 00:14:39.952 Idle Power: Not Reported 00:14:39.952 Active Power: Not Reported 00:14:39.952 Non-Operational Permissive Mode: Not Supported 00:14:39.952 00:14:39.952 Health Information 00:14:39.952 ================== 00:14:39.952 Critical Warnings: 00:14:39.952 Available Spare Space: 
OK 00:14:39.952 Temperature: OK 00:14:39.952 Device Reliability: OK 00:14:39.952 Read Only: No 00:14:39.952 Volatile Memory Backup: OK 00:14:39.952 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:39.952 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:39.952 Available Spare: 0% 00:14:39.952 Available Spare Threshold: 0% 00:14:39.952 Life Percentage Used:[2024-10-16 09:29:04.142449] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.952 [2024-10-16 09:29:04.142457] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc15750) 00:14:39.952 [2024-10-16 09:29:04.142465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.952 [2024-10-16 09:29:04.142488] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc7a2c0, cid 7, qid 0 00:14:39.952 [2024-10-16 09:29:04.145596] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.952 [2024-10-16 09:29:04.145618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.952 [2024-10-16 09:29:04.145623] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.952 [2024-10-16 09:29:04.145628] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc7a2c0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.145689] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:39.953 [2024-10-16 09:29:04.145707] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79840) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.145715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.953 [2024-10-16 09:29:04.145721] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc799c0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.145735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.953 [2024-10-16 09:29:04.145741] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79b40) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.145746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.953 [2024-10-16 09:29:04.145752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.145757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.953 [2024-10-16 09:29:04.145767] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.145772] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.145776] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.145785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.145814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.145874] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.145888] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.145893] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.145897] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.145906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.145910] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.145930] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.145938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.145961] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146037] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146046] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.146051] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:39.953 [2024-10-16 09:29:04.146057] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:39.953 [2024-10-16 09:29:04.146067] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146076] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.146083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.146102] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146149] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146156] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146160] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146164] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.146175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146180] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146184] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.146192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.146209] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146255] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146266] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146270] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.146281] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.146297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.146314] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146357] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146377] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146382] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.146393] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146402] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.146410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.146428] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146481] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146492] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146497] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.146512] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146517] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146521] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.146529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.146575] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146644] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146648] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.146659] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146668] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.146676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.146695] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146746] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146753] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146762] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.953 [2024-10-16 09:29:04.146783] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146788] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.953 [2024-10-16 09:29:04.146792] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.953 [2024-10-16 09:29:04.146800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.953 [2024-10-16 09:29:04.146817] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.953 [2024-10-16 09:29:04.146865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.953 [2024-10-16 09:29:04.146872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.953 [2024-10-16 09:29:04.146876] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.146881] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.146891] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.146896] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.146901] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.146908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.146940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.146987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.146994] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.146998] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147002] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 
[2024-10-16 09:29:04.147012] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147017] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147021] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147046] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147089] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147100] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147115] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147119] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147123] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147147] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147188] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147195] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147199] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147203] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147213] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147218] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147222] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147246] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147297] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147301] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147312] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 
09:29:04.147320] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147344] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147388] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147395] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147399] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147403] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147413] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147422] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147487] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147498] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147503] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147518] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147523] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147527] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147578] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147628] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147636] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147640] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147644] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147655] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147738] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147745] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147749] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147753] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147764] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147769] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147773] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147798] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147844] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147856] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147861] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147865] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147876] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147881] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147885] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.147893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.147911] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.147970] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.147977] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.147981] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.147985] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.147996] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148000] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148005] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.148012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.148029] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 
09:29:04.148070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.148077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.148081] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148085] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.148095] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148104] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.148112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.954 [2024-10-16 09:29:04.148128] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.954 [2024-10-16 09:29:04.148174] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.954 [2024-10-16 09:29:04.148181] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.954 [2024-10-16 09:29:04.148185] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148190] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.954 [2024-10-16 09:29:04.148200] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148205] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.954 [2024-10-16 09:29:04.148209] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.954 [2024-10-16 09:29:04.148216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.148233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.148279] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.148286] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.148290] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148294] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.148305] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148310] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148313] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.148321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.148338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.148379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.148385] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 
09:29:04.148389] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.148404] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148409] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148413] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.148420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.148437] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.148483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.148490] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.148494] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148498] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.148508] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148517] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.148525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.148541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.148616] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.148625] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.148629] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.148645] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148650] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148654] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.148661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.148681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.148723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.148735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.148740] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148744] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 
00:14:39.955 [2024-10-16 09:29:04.148755] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148764] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.148772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.148790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.148832] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.148839] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.148843] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.148858] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148863] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.148875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.148892] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.148952] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.148959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.148963] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148967] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.148977] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148982] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.148986] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.148994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.149010] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.149051] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.149058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.149062] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149066] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.149076] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149081] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:39.955 [2024-10-16 09:29:04.149085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.149093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.149109] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.149158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.149165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.149169] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149174] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.149211] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149221] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.149229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.149248] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.149296] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.149307] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.149311] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149315] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.149326] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149332] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149336] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.149343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.149362] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.149409] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.149420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.149424] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149429] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.149440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.149449] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.149457] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.149475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.149523] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.955 [2024-10-16 09:29:04.149530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.955 [2024-10-16 09:29:04.149534] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.152601] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.955 [2024-10-16 09:29:04.152621] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.152627] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:39.955 [2024-10-16 09:29:04.152631] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc15750) 00:14:39.955 [2024-10-16 09:29:04.152640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:39.955 [2024-10-16 09:29:04.152665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc79cc0, cid 3, qid 0 00:14:39.955 [2024-10-16 09:29:04.152715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:39.956 [2024-10-16 09:29:04.152723] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:39.956 [2024-10-16 09:29:04.152727] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:39.956 [2024-10-16 09:29:04.152731] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc79cc0) on tqpair=0xc15750 00:14:39.956 [2024-10-16 09:29:04.152740] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:14:39.956 0% 00:14:39.956 Data Units Read: 0 00:14:39.956 Data Units Written: 0 00:14:39.956 Host Read Commands: 0 00:14:39.956 Host Write Commands: 0 00:14:39.956 Controller Busy Time: 0 minutes 00:14:39.956 Power Cycles: 0 00:14:39.956 Power On Hours: 0 hours 00:14:39.956 Unsafe Shutdowns: 0 00:14:39.956 Unrecoverable Media Errors: 0 00:14:39.956 Lifetime Error Log Entries: 0 00:14:39.956 Warning Temperature Time: 0 minutes 00:14:39.956 Critical Temperature Time: 0 minutes 00:14:39.956 00:14:39.956 Number of Queues 00:14:39.956 ================ 00:14:39.956 Number of I/O Submission Queues: 127 00:14:39.956 Number of I/O Completion Queues: 127 00:14:39.956 00:14:39.956 Active Namespaces 00:14:39.956 ================= 00:14:39.956 Namespace ID:1 00:14:39.956 Error Recovery Timeout: Unlimited 00:14:39.956 Command Set Identifier: NVM (00h) 00:14:39.956 Deallocate: Supported 00:14:39.956 Deallocated/Unwritten Error: Not Supported 00:14:39.956 Deallocated Read Value: Unknown 00:14:39.956 Deallocate in Write Zeroes: Not Supported 00:14:39.956 Deallocated Guard Field: 0xFFFF 00:14:39.956 Flush: Supported 00:14:39.956 Reservation: Supported 00:14:39.956 Namespace Sharing Capabilities: Multiple Controllers 00:14:39.956 Size (in LBAs): 131072 (0GiB) 00:14:39.956 Capacity (in LBAs): 131072 (0GiB) 00:14:39.956 Utilization (in LBAs): 131072 (0GiB) 00:14:39.956 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:39.956 EUI64: ABCDEF0123456789 00:14:39.956 UUID: d0139ee6-84cd-4084-b535-a604791814b9 00:14:39.956 Thin Provisioning: Not Supported 00:14:39.956 
Per-NS Atomic Units: Yes 00:14:39.956 Atomic Boundary Size (Normal): 0 00:14:39.956 Atomic Boundary Size (PFail): 0 00:14:39.956 Atomic Boundary Offset: 0 00:14:39.956 Maximum Single Source Range Length: 65535 00:14:39.956 Maximum Copy Length: 65535 00:14:39.956 Maximum Source Range Count: 1 00:14:39.956 NGUID/EUI64 Never Reused: No 00:14:39.956 Namespace Write Protected: No 00:14:39.956 Number of LBA Formats: 1 00:14:39.956 Current LBA Format: LBA Format #00 00:14:39.956 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:39.956 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.956 rmmod nvme_tcp 00:14:39.956 rmmod nvme_fabrics 00:14:39.956 rmmod nvme_keyring 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 73614 ']' 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 73614 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 73614 ']' 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 73614 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73614 00:14:39.956 killing process with pid 73614 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73614' 00:14:39.956 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 73614 00:14:39.956 09:29:04 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 73614 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:40.215 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:40.474 00:14:40.474 real 0m2.226s 00:14:40.474 user 0m4.480s 00:14:40.474 sys 0m0.778s 00:14:40.474 ************************************ 00:14:40.474 END TEST nvmf_identify 00:14:40.474 ************************************ 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host 
-- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:40.474 ************************************ 00:14:40.474 START TEST nvmf_perf 00:14:40.474 ************************************ 00:14:40.474 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:40.735 * Looking for test storage... 00:14:40.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.735 09:29:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:40.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.735 --rc genhtml_branch_coverage=1 00:14:40.735 --rc genhtml_function_coverage=1 00:14:40.735 --rc genhtml_legend=1 00:14:40.735 --rc geninfo_all_blocks=1 00:14:40.735 --rc geninfo_unexecuted_blocks=1 00:14:40.735 00:14:40.735 ' 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:40.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.735 --rc genhtml_branch_coverage=1 00:14:40.735 --rc genhtml_function_coverage=1 00:14:40.735 --rc genhtml_legend=1 00:14:40.735 --rc geninfo_all_blocks=1 00:14:40.735 --rc geninfo_unexecuted_blocks=1 00:14:40.735 00:14:40.735 ' 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:40.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.735 --rc genhtml_branch_coverage=1 00:14:40.735 --rc genhtml_function_coverage=1 00:14:40.735 --rc genhtml_legend=1 00:14:40.735 --rc geninfo_all_blocks=1 00:14:40.735 --rc geninfo_unexecuted_blocks=1 00:14:40.735 00:14:40.735 ' 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:40.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.735 --rc genhtml_branch_coverage=1 00:14:40.735 --rc genhtml_function_coverage=1 00:14:40.735 --rc genhtml_legend=1 00:14:40.735 --rc geninfo_all_blocks=1 00:14:40.735 --rc geninfo_unexecuted_blocks=1 00:14:40.735 00:14:40.735 ' 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.735 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.735 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:40.736 Cannot find device "nvmf_init_br" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:40.736 Cannot find device "nvmf_init_br2" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:40.736 Cannot find device "nvmf_tgt_br" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.736 Cannot find device "nvmf_tgt_br2" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:40.736 Cannot find device "nvmf_init_br" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:40.736 Cannot find device "nvmf_init_br2" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:40.736 Cannot find device "nvmf_tgt_br" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:40.736 Cannot find device "nvmf_tgt_br2" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:40.736 Cannot find device "nvmf_br" 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:40.736 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:40.995 Cannot find device "nvmf_init_if" 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:40.995 Cannot find device "nvmf_init_if2" 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:40.995 09:29:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:40.995 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:40.996 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:41.255 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.255 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:14:41.255 00:14:41.255 --- 10.0.0.3 ping statistics --- 00:14:41.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.255 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:41.255 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:41.255 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:14:41.255 00:14:41.255 --- 10.0.0.4 ping statistics --- 00:14:41.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.255 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:41.255 00:14:41.255 --- 10.0.0.1 ping statistics --- 00:14:41.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.255 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:41.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:41.255 00:14:41.255 --- 10.0.0.2 ping statistics --- 00:14:41.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.255 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # return 0 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:41.255 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=73864 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 73864 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 73864 ']' 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
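The veth bring-up traced above is what gives the rest of this test its addresses: nvmf_veth_init (test/nvmf/common.sh) first tears down any interfaces left over from a previous run (the tolerated "Cannot find device" errors), then creates a network namespace for the target, two pairs of veth devices, and a bridge joining them, and finally opens TCP port 4420 through iptables. A condensed sketch of that topology, assuming the same interface and namespace names the script uses:

  # Target side lives in its own namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk

  # One veth pair per side: the *_if end carries the IP, the *_br end joins the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator gets 10.0.0.1, target gets 10.0.0.3 (the second pair adds .2 and .4).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bring the links up, then enslave the bridge-side ends to nvmf_br.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Allow NVMe/TCP traffic in on the initiator-facing interface.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The four ping checks above simply confirm that the initiator addresses (10.0.0.1/10.0.0.2) and the namespaced target addresses (10.0.0.3/10.0.0.4) can reach each other across the bridge before the target application is launched.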
00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.256 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:41.256 [2024-10-16 09:29:05.495788] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:41.256 [2024-10-16 09:29:05.495849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.256 [2024-10-16 09:29:05.627536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.515 [2024-10-16 09:29:05.673014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.515 [2024-10-16 09:29:05.673084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.515 [2024-10-16 09:29:05.673095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.515 [2024-10-16 09:29:05.673118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.515 [2024-10-16 09:29:05.673126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.515 [2024-10-16 09:29:05.674339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.515 [2024-10-16 09:29:05.674499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.515 [2024-10-16 09:29:05.674635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.515 [2024-10-16 09:29:05.674634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.515 [2024-10-16 09:29:05.727481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:41.515 09:29:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:42.083 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:42.083 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:42.344 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:42.344 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:42.603 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:42.603 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:42.603 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:42.603 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:42.603 09:29:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:42.862 [2024-10-16 09:29:07.133083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.862 09:29:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:43.119 09:29:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:43.119 09:29:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.378 09:29:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:43.379 09:29:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:43.637 09:29:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.896 [2024-10-16 09:29:08.162370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.896 09:29:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:44.155 09:29:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:44.155 09:29:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:44.155 09:29:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:44.155 09:29:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:45.092 Initializing NVMe Controllers 00:14:45.092 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:45.092 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:45.092 Initialization complete. Launching workers. 00:14:45.092 ======================================================== 00:14:45.092 Latency(us) 00:14:45.092 Device Information : IOPS MiB/s Average min max 00:14:45.092 PCIE (0000:00:10.0) NSID 1 from core 0: 21632.00 84.50 1478.53 344.67 8166.18 00:14:45.092 ======================================================== 00:14:45.092 Total : 21632.00 84.50 1478.53 344.67 8166.18 00:14:45.092 00:14:45.351 09:29:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:46.729 Initializing NVMe Controllers 00:14:46.729 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:46.729 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:46.729 Initialization complete. Launching workers. 
00:14:46.729 ======================================================== 00:14:46.729 Latency(us) 00:14:46.729 Device Information : IOPS MiB/s Average min max 00:14:46.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4100.50 16.02 242.59 92.61 7160.36 00:14:46.729 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.74 0.48 8144.91 6979.62 11985.30 00:14:46.729 ======================================================== 00:14:46.729 Total : 4224.24 16.50 474.08 92.61 11985.30 00:14:46.729 00:14:46.729 09:29:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:47.665 Initializing NVMe Controllers 00:14:47.665 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:47.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:47.665 Initialization complete. Launching workers. 00:14:47.665 ======================================================== 00:14:47.665 Latency(us) 00:14:47.665 Device Information : IOPS MiB/s Average min max 00:14:47.665 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9476.59 37.02 3377.36 512.88 9154.87 00:14:47.665 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3973.57 15.52 8065.47 6269.92 12226.96 00:14:47.665 ======================================================== 00:14:47.666 Total : 13450.16 52.54 4762.36 512.88 12226.96 00:14:47.666 00:14:47.924 09:29:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:47.924 09:29:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:50.458 Initializing NVMe Controllers 00:14:50.458 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.458 Controller IO queue size 128, less than required. 00:14:50.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.458 Controller IO queue size 128, less than required. 00:14:50.458 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.458 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:50.458 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:50.458 Initialization complete. Launching workers. 
00:14:50.458 ======================================================== 00:14:50.459 Latency(us) 00:14:50.459 Device Information : IOPS MiB/s Average min max 00:14:50.459 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1884.98 471.24 68789.84 36498.92 106242.26 00:14:50.459 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 699.88 174.97 189490.79 64099.35 301720.86 00:14:50.459 ======================================================== 00:14:50.459 Total : 2584.85 646.21 101470.94 36498.92 301720.86 00:14:50.459 00:14:50.459 09:29:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:50.718 Initializing NVMe Controllers 00:14:50.718 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.718 Controller IO queue size 128, less than required. 00:14:50.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.718 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:50.718 Controller IO queue size 128, less than required. 00:14:50.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:50.718 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:50.718 WARNING: Some requested NVMe devices were skipped 00:14:50.718 No valid NVMe controllers or AIO or URING devices found 00:14:50.718 09:29:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:53.293 Initializing NVMe Controllers 00:14:53.293 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.293 Controller IO queue size 128, less than required. 00:14:53.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:53.293 Controller IO queue size 128, less than required. 00:14:53.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:53.293 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:53.293 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:53.293 Initialization complete. Launching workers. 
00:14:53.293 00:14:53.293 ==================== 00:14:53.293 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:53.293 TCP transport: 00:14:53.293 polls: 8730 00:14:53.293 idle_polls: 4279 00:14:53.293 sock_completions: 4451 00:14:53.293 nvme_completions: 6585 00:14:53.293 submitted_requests: 9900 00:14:53.293 queued_requests: 1 00:14:53.293 00:14:53.293 ==================== 00:14:53.294 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:53.294 TCP transport: 00:14:53.294 polls: 9221 00:14:53.294 idle_polls: 5144 00:14:53.294 sock_completions: 4077 00:14:53.294 nvme_completions: 6409 00:14:53.294 submitted_requests: 9604 00:14:53.294 queued_requests: 1 00:14:53.294 ======================================================== 00:14:53.294 Latency(us) 00:14:53.294 Device Information : IOPS MiB/s Average min max 00:14:53.294 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1645.55 411.39 79941.74 50067.12 129846.07 00:14:53.294 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1601.56 400.39 80221.74 41578.33 136739.73 00:14:53.294 ======================================================== 00:14:53.294 Total : 3247.10 811.78 80079.84 41578.33 136739.73 00:14:53.294 00:14:53.294 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:53.294 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:53.553 rmmod nvme_tcp 00:14:53.553 rmmod nvme_fabrics 00:14:53.553 rmmod nvme_keyring 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 73864 ']' 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 73864 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 73864 ']' 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 73864 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73864 00:14:53.553 killing process with pid 73864 00:14:53.553 09:29:17 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73864' 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 73864 00:14:53.553 09:29:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 73864 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:53.812 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:54.072 00:14:54.072 real 0m13.505s 00:14:54.072 user 0m48.792s 00:14:54.072 sys 0m3.977s 00:14:54.072 
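Stripped of the tracing, the work nvmf_perf did above reduces to a short rpc.py dialogue with the target followed by a handful of spdk_nvme_perf runs against the exported subsystem. A minimal sketch of the fabric path, reusing the subsystem NQN, listen address and the first fabric workload from this run (the Malloc0 and Nvme0n1 bdevs are assumed to already exist, as they do at that point in the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  # Target side: TCP transport, one subsystem, two namespaces, one listener on 10.0.0.3:4420.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Initiator side: 4 KiB random read/write at queue depth 1 over NVMe/TCP,
  # matching the first fabric measurement above.
  $PERF -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

  # Teardown mirrors the end of the test.
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

(The serial number in the sketch is illustrative; the trace itself passed SPDK00000000000001.) The later runs only vary the initiator parameters: queue depth, IO size, runtime, and in the last case --transport-stat, which is what produced the TCP poll counters (polls, idle_polls, sock_completions, nvme_completions, ...) printed above.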
************************************ 00:14:54.072 END TEST nvmf_perf 00:14:54.072 ************************************ 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.072 ************************************ 00:14:54.072 START TEST nvmf_fio_host 00:14:54.072 ************************************ 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:54.072 * Looking for test storage... 00:14:54.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.072 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.332 --rc genhtml_branch_coverage=1 00:14:54.332 --rc genhtml_function_coverage=1 00:14:54.332 --rc genhtml_legend=1 00:14:54.332 --rc geninfo_all_blocks=1 00:14:54.332 --rc geninfo_unexecuted_blocks=1 00:14:54.332 00:14:54.332 ' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.332 --rc genhtml_branch_coverage=1 00:14:54.332 --rc genhtml_function_coverage=1 00:14:54.332 --rc genhtml_legend=1 00:14:54.332 --rc geninfo_all_blocks=1 00:14:54.332 --rc geninfo_unexecuted_blocks=1 00:14:54.332 00:14:54.332 ' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.332 --rc genhtml_branch_coverage=1 00:14:54.332 --rc genhtml_function_coverage=1 00:14:54.332 --rc genhtml_legend=1 00:14:54.332 --rc geninfo_all_blocks=1 00:14:54.332 --rc geninfo_unexecuted_blocks=1 00:14:54.332 00:14:54.332 ' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.332 --rc genhtml_branch_coverage=1 00:14:54.332 --rc genhtml_function_coverage=1 00:14:54.332 --rc genhtml_legend=1 00:14:54.332 --rc geninfo_all_blocks=1 00:14:54.332 --rc geninfo_unexecuted_blocks=1 00:14:54.332 00:14:54.332 ' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.332 09:29:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.332 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.333 09:29:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:54.333 Cannot find device "nvmf_init_br" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:54.333 Cannot find device "nvmf_init_br2" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:54.333 Cannot find device "nvmf_tgt_br" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:54.333 Cannot find device "nvmf_tgt_br2" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:54.333 Cannot find device "nvmf_init_br" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:54.333 Cannot find device "nvmf_init_br2" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:54.333 Cannot find device "nvmf_tgt_br" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:54.333 Cannot find device "nvmf_tgt_br2" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:54.333 Cannot find device "nvmf_br" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:54.333 Cannot find device "nvmf_init_if" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:54.333 Cannot find device "nvmf_init_if2" 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.333 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:54.593 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:54.593 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:54.593 00:14:54.593 --- 10.0.0.3 ping statistics --- 00:14:54.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.593 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:54.593 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:54.593 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:14:54.593 00:14:54.593 --- 10.0.0.4 ping statistics --- 00:14:54.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.593 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:54.593 00:14:54.593 --- 10.0.0.1 ping statistics --- 00:14:54.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.593 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:54.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:54.593 00:14:54.593 --- 10.0.0.2 ping statistics --- 00:14:54.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.593 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # return 0 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
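The nvmf_veth_init trace above builds the virtual test network used by every host test in this run: the initiator-side veth pairs (10.0.0.1, 10.0.0.2) stay in the root namespace, the target-side pairs (10.0.0.3, 10.0.0.4) have their addressed ends moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers are all enslaved to nvmf_br, and iptables rules tagged with an SPDK_NVMF comment open TCP port 4420. A minimal standalone sketch of that topology, reduced to a single initiator/target pair and with no teardown or error handling, might look like:

#!/usr/bin/env bash
# Sketch of the topology set up by nvmf_veth_init above (assumption: one
# interface pair per side; the real helper creates two of each).
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Each veth pair has an addressed end (*_if) and a bridge end (*_br).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"              # only the target's addressed end moves

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the two bridge ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open NVMe/TCP port 4420; the SPDK_NVMF comment lets teardown strip only these rules.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity check, mirroring the pings above.
ping -c 1 10.0.0.3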
00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74316 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74316 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74316 ']' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.593 09:29:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.852 [2024-10-16 09:29:19.015300] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:14:54.852 [2024-10-16 09:29:19.015568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.852 [2024-10-16 09:29:19.158233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.852 [2024-10-16 09:29:19.212915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.852 [2024-10-16 09:29:19.213287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.852 [2024-10-16 09:29:19.213466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.852 [2024-10-16 09:29:19.213672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.852 [2024-10-16 09:29:19.213716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
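With the network in place, the target is launched inside the namespace and the script blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough equivalent of that launch-and-wait step, assuming a simple existence poll on the socket path rather than the full waitforlisten retry logic, is:

# Sketch only: waitforlisten in autotest_common.sh retries an actual RPC; here
# we merely poll for the UNIX socket file (an assumption, not the real check).
NS_EXEC=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

"${NS_EXEC[@]}" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoint groups, cores 0-3
nvmfpid=$!

for _ in $(seq 1 100); do
    [[ -S $RPC_SOCK ]] && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.1
done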
00:14:54.852 [2024-10-16 09:29:19.215126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.852 [2024-10-16 09:29:19.215241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.852 [2024-10-16 09:29:19.215324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.852 [2024-10-16 09:29:19.215322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.112 [2024-10-16 09:29:19.274018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:55.112 09:29:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.112 09:29:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:14:55.112 09:29:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:55.370 [2024-10-16 09:29:19.612883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.370 09:29:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:55.370 09:29:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.370 09:29:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:55.370 09:29:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:55.630 Malloc1 00:14:55.630 09:29:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:56.198 09:29:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:56.198 09:29:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:56.457 [2024-10-16 09:29:20.745036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:56.457 09:29:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:56.716 09:29:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:56.975 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:56.975 fio-3.35 00:14:56.975 Starting 1 thread 00:14:59.510 00:14:59.510 test: (groupid=0, jobs=1): err= 0: pid=74387: Wed Oct 16 09:29:23 2024 00:14:59.510 read: IOPS=9383, BW=36.7MiB/s (38.4MB/s)(73.5MiB/2006msec) 00:14:59.510 slat (nsec): min=1817, max=333415, avg=2359.27, stdev=3305.72 00:14:59.510 clat (usec): min=3792, max=12378, avg=7110.82, stdev=550.19 00:14:59.510 lat (usec): min=3835, max=12380, avg=7113.18, stdev=550.09 00:14:59.510 clat percentiles (usec): 00:14:59.510 | 1.00th=[ 6063], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6652], 00:14:59.510 | 30.00th=[ 6849], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:14:59.510 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 8029], 00:14:59.510 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[11469], 99.95th=[12125], 00:14:59.510 | 99.99th=[12387] 00:14:59.510 bw ( KiB/s): min=37072, max=37928, per=99.95%, avg=37514.00, stdev=361.59, samples=4 00:14:59.510 iops : min= 9268, max= 9482, avg=9378.50, stdev=90.40, samples=4 00:14:59.510 write: IOPS=9386, BW=36.7MiB/s (38.4MB/s)(73.6MiB/2006msec); 0 zone resets 00:14:59.510 slat (nsec): min=1884, max=1400.4k, avg=2483.16, stdev=10316.40 00:14:59.510 clat (usec): min=3642, max=12353, avg=6479.77, stdev=496.07 00:14:59.510 lat (usec): min=3655, max=12355, avg=6482.25, stdev=496.07 00:14:59.510 
clat percentiles (usec): 00:14:59.510 | 1.00th=[ 5538], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:14:59.510 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:14:59.510 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:14:59.510 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9765], 99.95th=[11469], 00:14:59.510 | 99.99th=[12256] 00:14:59.510 bw ( KiB/s): min=36800, max=38144, per=99.98%, avg=37538.00, stdev=603.08, samples=4 00:14:59.510 iops : min= 9200, max= 9536, avg=9384.50, stdev=150.77, samples=4 00:14:59.510 lat (msec) : 4=0.02%, 10=99.87%, 20=0.11% 00:14:59.510 cpu : usr=69.23%, sys=24.09%, ctx=47, majf=0, minf=9 00:14:59.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:59.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:59.510 issued rwts: total=18823,18830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:59.510 00:14:59.510 Run status group 0 (all jobs): 00:14:59.510 READ: bw=36.7MiB/s (38.4MB/s), 36.7MiB/s-36.7MiB/s (38.4MB/s-38.4MB/s), io=73.5MiB (77.1MB), run=2006-2006msec 00:14:59.510 WRITE: bw=36.7MiB/s (38.4MB/s), 36.7MiB/s-36.7MiB/s (38.4MB/s-38.4MB/s), io=73.6MiB (77.1MB), run=2006-2006msec 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:59.510 09:29:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:59.510 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:59.510 fio-3.35 00:14:59.510 Starting 1 thread 00:15:02.048 00:15:02.048 test: (groupid=0, jobs=1): err= 0: pid=74436: Wed Oct 16 09:29:26 2024 00:15:02.048 read: IOPS=8648, BW=135MiB/s (142MB/s)(271MiB/2005msec) 00:15:02.048 slat (usec): min=2, max=104, avg= 3.55, stdev= 2.09 00:15:02.048 clat (usec): min=2503, max=15801, avg=8176.22, stdev=2364.70 00:15:02.048 lat (usec): min=2507, max=15806, avg=8179.77, stdev=2364.78 00:15:02.048 clat percentiles (usec): 00:15:02.048 | 1.00th=[ 4080], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6128], 00:15:02.048 | 30.00th=[ 6718], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8586], 00:15:02.048 | 70.00th=[ 9241], 80.00th=[10028], 90.00th=[11469], 95.00th=[12780], 00:15:02.048 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15533], 99.95th=[15664], 00:15:02.048 | 99.99th=[15795] 00:15:02.048 bw ( KiB/s): min=64096, max=77088, per=50.90%, avg=70432.00, stdev=7143.39, samples=4 00:15:02.048 iops : min= 4006, max= 4818, avg=4402.00, stdev=446.46, samples=4 00:15:02.048 write: IOPS=4993, BW=78.0MiB/s (81.8MB/s)(144MiB/1841msec); 0 zone resets 00:15:02.048 slat (usec): min=28, max=206, avg=35.20, stdev= 7.73 00:15:02.048 clat (usec): min=2451, max=20233, avg=11690.86, stdev=2121.72 00:15:02.048 lat (usec): min=2483, max=20264, avg=11726.07, stdev=2122.67 00:15:02.048 clat percentiles (usec): 00:15:02.048 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9765], 00:15:02.048 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:15:02.048 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14484], 95.00th=[15533], 00:15:02.048 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19530], 99.95th=[20055], 00:15:02.048 | 99.99th=[20317] 00:15:02.048 bw ( KiB/s): min=65952, max=79040, per=91.26%, avg=72912.00, stdev=7091.42, samples=4 00:15:02.048 iops : min= 4122, max= 4940, avg=4557.00, stdev=443.21, samples=4 00:15:02.048 lat (msec) : 4=0.56%, 10=59.51%, 20=39.91%, 50=0.02% 00:15:02.048 cpu : usr=76.38%, sys=18.68%, ctx=15, majf=0, minf=23 00:15:02.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:02.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.048 issued rwts: total=17341,9193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.048 00:15:02.048 Run status group 0 (all jobs): 00:15:02.048 
READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=271MiB (284MB), run=2005-2005msec 00:15:02.048 WRITE: bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=144MiB (151MB), run=1841-1841msec 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:02.048 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:02.049 rmmod nvme_tcp 00:15:02.049 rmmod nvme_fabrics 00:15:02.049 rmmod nvme_keyring 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 74316 ']' 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 74316 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74316 ']' 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74316 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74316 00:15:02.049 killing process with pid 74316 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74316' 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74316 00:15:02.049 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74316 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:02.308 09:29:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:02.308 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:02.567 ************************************ 00:15:02.567 END TEST nvmf_fio_host 00:15:02.567 ************************************ 00:15:02.567 00:15:02.567 real 0m8.541s 00:15:02.567 user 0m33.930s 00:15:02.567 sys 0m2.562s 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.567 09:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:02.827 ************************************ 00:15:02.827 START TEST nvmf_failover 
00:15:02.827 ************************************ 00:15:02.827 09:29:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:02.827 * Looking for test storage... 00:15:02.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.827 --rc genhtml_branch_coverage=1 00:15:02.827 --rc genhtml_function_coverage=1 00:15:02.827 --rc genhtml_legend=1 00:15:02.827 --rc geninfo_all_blocks=1 00:15:02.827 --rc geninfo_unexecuted_blocks=1 00:15:02.827 00:15:02.827 ' 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.827 --rc genhtml_branch_coverage=1 00:15:02.827 --rc genhtml_function_coverage=1 00:15:02.827 --rc genhtml_legend=1 00:15:02.827 --rc geninfo_all_blocks=1 00:15:02.827 --rc geninfo_unexecuted_blocks=1 00:15:02.827 00:15:02.827 ' 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.827 --rc genhtml_branch_coverage=1 00:15:02.827 --rc genhtml_function_coverage=1 00:15:02.827 --rc genhtml_legend=1 00:15:02.827 --rc geninfo_all_blocks=1 00:15:02.827 --rc geninfo_unexecuted_blocks=1 00:15:02.827 00:15:02.827 ' 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.827 --rc genhtml_branch_coverage=1 00:15:02.827 --rc genhtml_function_coverage=1 00:15:02.827 --rc genhtml_legend=1 00:15:02.827 --rc geninfo_all_blocks=1 00:15:02.827 --rc geninfo_unexecuted_blocks=1 00:15:02.827 00:15:02.827 ' 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.827 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.828 
09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:02.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 
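At this point failover.sh has defined the two RPC endpoints it will drive for the rest of the test: the nvmf target on the default /var/tmp/spdk.sock (rpc_py) and a bdevperf instance on /var/tmp/bdevperf.sock (bdevperf_rpc_sock). The wrapper pattern looks roughly like the sketch below; the helper names and the Malloc0 bdev name are illustrative, while rpc.py's -s socket option and the 64 MiB / 512-byte malloc sizes come from the trace.

# Illustrative only: two thin wrappers around scripts/rpc.py, one per socket.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
MALLOC_BDEV_SIZE=64
MALLOC_BLOCK_SIZE=512

tgt_rpc()      { "$rpc_py" "$@"; }                          # default socket: /var/tmp/spdk.sock
bdevperf_rpc() { "$rpc_py" -s "$bdevperf_rpc_sock" "$@"; }  # bdevperf's private socket

# Example: create the 64 MiB, 512-byte-block malloc bdev the target will export
# (bdev name is a placeholder, not taken from the script).
tgt_rpc bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc0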
00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:02.828 Cannot find device "nvmf_init_br" 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:02.828 Cannot find device "nvmf_init_br2" 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:02.828 Cannot find device "nvmf_tgt_br" 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.828 Cannot find device "nvmf_tgt_br2" 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:02.828 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:03.087 Cannot find device "nvmf_init_br" 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:03.087 Cannot find device "nvmf_init_br2" 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:03.087 Cannot find device "nvmf_tgt_br" 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:03.087 Cannot find device "nvmf_tgt_br2" 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:03.087 Cannot find device "nvmf_br" 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:03.087 Cannot find device "nvmf_init_if" 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:03.087 Cannot find device "nvmf_init_if2" 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:03.087 
09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:03.087 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.346 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:03.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:03.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:15:03.347 00:15:03.347 --- 10.0.0.3 ping statistics --- 00:15:03.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.347 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:03.347 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:03.347 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:15:03.347 00:15:03.347 --- 10.0.0.4 ping statistics --- 00:15:03.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.347 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:03.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:03.347 00:15:03.347 --- 10.0.0.1 ping statistics --- 00:15:03.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.347 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:03.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:03.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:03.347 00:15:03.347 --- 10.0.0.2 ping statistics --- 00:15:03.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.347 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # return 0 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=74695 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 74695 00:15:03.347 09:29:27 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74695 ']' 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.347 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:03.347 [2024-10-16 09:29:27.666807] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:15:03.347 [2024-10-16 09:29:27.666875] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.606 [2024-10-16 09:29:27.799504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:03.606 [2024-10-16 09:29:27.843724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.606 [2024-10-16 09:29:27.843769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.606 [2024-10-16 09:29:27.843778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.606 [2024-10-16 09:29:27.843785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.606 [2024-10-16 09:29:27.843792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
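The nvmf/common.sh trace above amounts to the following host/namespace topology; this is a condensed sketch reconstructed from the logged commands (not from the script source), reusing the interface names and 10.0.0.0/24 addresses exactly as they appear in the trace:

    # namespace that will own the target-side interfaces
    ip netns add nvmf_tgt_ns_spdk
    # two initiator-side and two target-side veth pairs
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator keeps 10.0.0.1/.2, the namespaced target answers on 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring the endpoints up on both sides of the namespace boundary
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the four peer ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    # let NVMe/TCP (port 4420) in and let the bridge forward
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings verify both directions before the target comes up, and NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk" so nvmf_tgt runs inside the namespace, which is why the listeners below bind to 10.0.0.3.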
00:15:03.606 [2024-10-16 09:29:27.844815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.606 [2024-10-16 09:29:27.845899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.606 [2024-10-16 09:29:27.845941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.606 [2024-10-16 09:29:27.899890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.606 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.606 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:03.606 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:03.606 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:03.606 09:29:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:03.606 09:29:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.606 09:29:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:04.173 [2024-10-16 09:29:28.297851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.173 09:29:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:04.432 Malloc0 00:15:04.432 09:29:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:04.691 09:29:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.950 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:05.209 [2024-10-16 09:29:29.409099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:05.209 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:05.468 [2024-10-16 09:29:29.689309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:05.468 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:05.728 [2024-10-16 09:29:29.949630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74751 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:05.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
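With nvmf_tgt (pid 74695) up and serving RPC on the default /var/tmp/spdk.sock, the target-side provisioning logged above reduces to a short rpc.py sequence. A sketch condensed from the trace, using the same bdev name, NQN, serial and ports:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as logged
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # three listeners on the namespaced address; these are the ports the
    # failover test removes and re-adds later to force path switches
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done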
00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74751 /var/tmp/bdevperf.sock 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74751 ']' 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:05.728 09:29:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:05.987 09:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.987 09:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:05.987 09:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:06.246 NVMe0n1 00:15:06.246 09:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:06.814 00:15:06.814 09:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=74767 00:15:06.814 09:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:06.814 09:29:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:07.751 09:29:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:08.010 09:29:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:11.298 09:29:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:11.298 00:15:11.298 09:29:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:11.557 09:29:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:14.845 09:29:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:14.845 [2024-10-16 09:29:39.166455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:14.845 09:29:39 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:15:15.781 09:29:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:15:16.357 09:29:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 74767
00:15:22.932 {
00:15:22.932 "results": [
00:15:22.932 {
00:15:22.932 "job": "NVMe0n1",
00:15:22.932 "core_mask": "0x1",
00:15:22.932 "workload": "verify",
00:15:22.932 "status": "finished",
00:15:22.932 "verify_range": {
00:15:22.932 "start": 0,
00:15:22.932 "length": 16384
00:15:22.932 },
00:15:22.932 "queue_depth": 128,
00:15:22.932 "io_size": 4096,
00:15:22.932 "runtime": 15.010876,
00:15:22.932 "iops": 9905.351293288946,
00:15:22.932 "mibps": 38.692778489409946,
00:15:22.932 "io_failed": 3621,
00:15:22.932 "io_timeout": 0,
00:15:22.932 "avg_latency_us": 12586.202974407886,
00:15:22.932 "min_latency_us": 528.7563636363636,
00:15:22.932 "max_latency_us": 16324.421818181818
00:15:22.932 }
00:15:22.932 ],
00:15:22.932 "core_count": 1
00:15:22.932 }
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74751
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74751 ']'
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74751
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74751
00:15:22.932 killing process with pid 74751 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74751'
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74751
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74751
00:15:22.932 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:22.932 [2024-10-16 09:29:30.015460] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization...
00:15:22.932 [2024-10-16 09:29:30.015559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74751 ]
00:15:22.932 [2024-10-16 09:29:30.152335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:22.932 [2024-10-16 09:29:30.206064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.932 [2024-10-16 09:29:30.264968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:15:22.932 Running I/O for 15 seconds...
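Put together, host/failover.sh exercises failover roughly as follows; a sketch condensed from the trace above (bdevperf was launched with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f, so it sits idle until perform_tests arrives on its own RPC socket):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bdevperf.sock
    # two paths to the same subsystem, attached as one failover-capable controller
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests &
    sleep 1
    # pull the active listener out from under the running verify workload ...
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    # ... add a third path, drop the second, restore the first, drop the third
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 3
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    wait    # perform_tests prints the JSON summary once the 15 s run finishes

The summary is self-consistent: 9905.35 IOPS × 4096 B per I/O ÷ 2^20 ≈ 38.69 MiB/s, matching the reported mibps, and the 3621 io_failed entries are presumably the commands aborted while paths were being torn down, as the try.txt dump below shows.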
00:15:22.932 7701.00 IOPS, 30.08 MiB/s [2024-10-16T09:29:47.336Z] [2024-10-16 09:29:32.249426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.932 [2024-10-16 09:29:32.249481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.932 [2024-10-16 09:29:32.249776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.249977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.249989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.250011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.250024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.250037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.250049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.250063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.250075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.250088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.250101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 
09:29:32.250114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.250126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.250140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.932 [2024-10-16 09:29:32.250152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.932 [2024-10-16 09:29:32.250166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.250978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.250992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:22.933 [2024-10-16 09:29:32.251242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.933 [2024-10-16 09:29:32.251328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.933 [2024-10-16 09:29:32.251342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251509] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251791] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.251980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.251994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:22.934 [2024-10-16 09:29:32.252346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.934 [2024-10-16 09:29:32.252475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.934 [2024-10-16 09:29:32.252487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.935 [2024-10-16 09:29:32.252513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.935 [2024-10-16 09:29:32.252546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.935 [2024-10-16 09:29:32.252596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.935 [2024-10-16 09:29:32.252623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 
09:29:32.252637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.935 [2024-10-16 09:29:32.252656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204010 is same with the state(6) to be set 00:15:22.935 [2024-10-16 09:29:32.252687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.252697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.252707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72848 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.252719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.252742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.252751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72872 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.252763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.252785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.252794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72880 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.252823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.252846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.252856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72888 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.252869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.252892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.252901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72896 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.252913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 
09:29:32.252935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.252960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72904 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.252972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.252984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.252993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72912 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72920 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72928 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72936 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72944 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253250] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72952 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72960 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72968 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72976 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.253435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.935 [2024-10-16 09:29:32.253445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.935 [2024-10-16 09:29:32.253455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72984 len:8 PRP1 0x0 PRP2 0x0 00:15:22.935 [2024-10-16 09:29:32.253472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.254454] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1204010 was disconnected and freed. reset controller. 
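
The block of "ABORTED - SQ DELETION (00/08)" completions above is what bdev_nvme emits when a TCP qpair drops mid-run: every command still queued on that submission queue is completed manually with an abort status, then the qpair is freed and the controller reset begins (the bdev_nvme_disconnected_qpair_cb line). Summarizing a dump like this by hand is tedious, so a small parser helps; the sketch below is a hypothetical helper (not part of the SPDK tree), and it only relies on the nvme_io_qpair_print_command and bdev_nvme_failover_trid NOTICE lines visible in this console output.

    #!/usr/bin/env python3
    """Summarize aborted I/O and failover events from an SPDK nvmf test log.

    Hypothetical helper, not part of SPDK: it only matches the NOTICE lines
    shown in this console output (nvme_io_qpair_print_command and
    bdev_nvme_failover_trid)."""
    import re
    import sys
    from collections import Counter

    # "READ sqid:1 cid:35 nsid:1 lba:72760 len:8 ..." command prints
    CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)")
    # "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421"
    FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

    def summarize(lines):
        ops = Counter()               # READ/WRITE commands printed during the abort storm
        lba_min, lba_max = None, 0    # block-address span the printed commands covered
        failovers = []                # (old trid, new trid) pairs
        for line in lines:
            for m in CMD_RE.finditer(line):   # several entries can share one physical line
                op, lba, length = m.group(1), int(m.group(2)), int(m.group(3))
                ops[op] += 1
                lba_min = lba if lba_min is None else min(lba_min, lba)
                lba_max = max(lba_max, lba + length)
            failovers.extend(FAILOVER_RE.findall(line))
        return ops, (lba_min, lba_max), failovers

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:
            ops, lba_span, failovers = summarize(f)
        print("commands printed:", dict(ops))
        print("LBA span:", lba_span)
        for src, dst in failovers:
            print(f"failover: {src} -> {dst}")
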
00:15:22.935 [2024-10-16 09:29:32.254480] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:22.935 [2024-10-16 09:29:32.254531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.935 [2024-10-16 09:29:32.254564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.254582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.935 [2024-10-16 09:29:32.254595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.254608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.935 [2024-10-16 09:29:32.254621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.254634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.935 [2024-10-16 09:29:32.254646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:32.254659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:22.935 [2024-10-16 09:29:32.254711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11952e0 (9): Bad file descriptor 00:15:22.935 [2024-10-16 09:29:32.258286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:22.935 [2024-10-16 09:29:32.291067] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
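
With the qpair gone, bdev_nvme fails the controller over from 10.0.0.3:4420 to 10.0.0.3:4421 and the reset completes, so the workload resumes and the per-interval throughput samples that follow (e.g. "8515.00 IOPS, 33.26 MiB/s") pick back up. The two columns are consistent with 4 KiB I/O: the commands above carry len:8 (eight blocks of an assumed 512 bytes), and MiB/s works out to IOPS * 4096 / 2^20. That 4 KiB size is inferred from those fields rather than stated anywhere in the log, so the check below is a sanity sketch under that assumption only.

    # Sanity-check the throughput samples against the assumed 4 KiB I/O size
    # (len:8 commands, 512-byte blocks). Sample values are taken from this log.
    IO_SIZE = 8 * 512                          # bytes per I/O, assumed

    def mib_per_s(iops: float, io_size: int = IO_SIZE) -> float:
        return iops * io_size / (1024 ** 2)

    for iops, reported in [(8515.00, 33.26), (9084.33, 35.49), (9366.75, 36.59)]:
        print(f"{iops:8.2f} IOPS -> {mib_per_s(iops):5.2f} MiB/s (log: {reported})")
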
00:15:22.935 8515.00 IOPS, 33.26 MiB/s [2024-10-16T09:29:47.339Z] 9084.33 IOPS, 35.49 MiB/s [2024-10-16T09:29:47.339Z] 9366.75 IOPS, 36.59 MiB/s [2024-10-16T09:29:47.339Z] [2024-10-16 09:29:35.888478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.935 [2024-10-16 09:29:35.888568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:35.888597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.935 [2024-10-16 09:29:35.888613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.935 [2024-10-16 09:29:35.888628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.888974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.888988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.889000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.889026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889162] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.936 [2024-10-16 09:29:35.889564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.889604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.889648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.889676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.889710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.936 [2024-10-16 09:29:35.889737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.936 [2024-10-16 09:29:35.889751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.889985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.889999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 
09:29:35.890107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890381] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.937 [2024-10-16 09:29:35.890474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.937 [2024-10-16 09:29:35.890915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.937 [2024-10-16 09:29:35.890929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.890944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.890957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.890971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.890984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.890999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111720 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:22.938 [2024-10-16 09:29:35.891612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 
[2024-10-16 09:29:35.891818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.891984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.891997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.892017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.892030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.892050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:22.938 [2024-10-16 09:29:35.892064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.892077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204010 is same with the state(6) to be set 00:15:22.938 [2024-10-16 09:29:35.892093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.938 [2024-10-16 09:29:35.892103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.938 [2024-10-16 09:29:35.892112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111296 len:8 PRP1 0x0 PRP2 0x0 
00:15:22.938 [2024-10-16 09:29:35.892124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.938 [2024-10-16 09:29:35.892138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.938 [2024-10-16 09:29:35.892147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111752 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.892181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.939 [2024-10-16 09:29:35.892190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111760 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.892228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.939 [2024-10-16 09:29:35.892238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111768 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.892271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.939 [2024-10-16 09:29:35.892280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111776 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.892314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.939 [2024-10-16 09:29:35.892323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111784 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.892362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.939 [2024-10-16 09:29:35.892372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111792 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.892405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.939 [2024-10-16 09:29:35.892420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111800 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.892455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:22.939 [2024-10-16 09:29:35.892463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:22.939 [2024-10-16 09:29:35.892472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111808 len:8 PRP1 0x0 PRP2 0x0 00:15:22.939 [2024-10-16 09:29:35.892484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.893456] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1204010 was disconnected and freed. reset controller. 00:15:22.939 [2024-10-16 09:29:35.893484] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:22.939 [2024-10-16 09:29:35.893563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.939 [2024-10-16 09:29:35.893598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.893629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.939 [2024-10-16 09:29:35.893642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.893655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.939 [2024-10-16 09:29:35.893668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.893687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.939 [2024-10-16 09:29:35.893699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.939 [2024-10-16 09:29:35.893712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:22.939 [2024-10-16 09:29:35.893758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11952e0 (9): Bad file descriptor
00:15:22.939 [2024-10-16 09:29:35.897264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:22.939 [2024-10-16 09:29:35.933417] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:22.939 9408.20 IOPS, 36.75 MiB/s [2024-10-16T09:29:47.343Z] 9556.17 IOPS, 37.33 MiB/s [2024-10-16T09:29:47.343Z] 9649.29 IOPS, 37.69 MiB/s [2024-10-16T09:29:47.343Z] 9718.12 IOPS, 37.96 MiB/s [2024-10-16T09:29:47.343Z] 9769.89 IOPS, 38.16 MiB/s [2024-10-16T09:29:47.343Z]
00:15:22.939 [2024-10-16 09:29:40.453725] [... the next path drop repeats the pattern: every in-flight WRITE (sqid:1, lba:96504 through lba:96944) and READ (lba:96056 through lba:96488) completes with ABORTED - SQ DELETION (00/08) ...]
00:15:22.942 [2024-10-16 09:29:40.457321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1208030 is same with the state(6) to be set
00:15:22.942 [2024-10-16 09:29:40.457338] [... the queued READ (lba:96496) and WRITE commands (lba:96952 through lba:97072) are completed manually with ABORTED - SQ DELETION (00/08) ...]
00:15:22.942 [2024-10-16 09:29:40.459164] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1208030 was disconnected and freed. reset controller.
00:15:22.942 [2024-10-16 09:29:40.459205] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:15:22.942 [... the four outstanding admin ASYNC EVENT REQUEST commands (qid:0 cid:0..3) are completed with ABORTED - SQ DELETION (00/08) ...]
00:15:22.943 [2024-10-16 09:29:40.459385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:22.943 [2024-10-16 09:29:40.459433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11952e0 (9): Bad file descriptor
00:15:22.943 [2024-10-16 09:29:40.463239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:22.943 [2024-10-16 09:29:40.502406] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
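The block above is one complete failover cycle: the TCP qpair to 10.0.0.3:4422 is torn down, every in-flight and queued command completes with ABORTED - SQ DELETION, bdev_nvme fails the trid over to 10.0.0.3:4420, and the controller reset finishes. When reading a capture like this one, the hops and reset outcomes can be pulled out with a couple of greps; this is only a reader-side sketch, using try.txt (the per-run output file the script cats further down) as an example target.
  # sketch: summarize failover hops and count successful resets in a captured log
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' try.txt | sort | uniq -c
  grep -c 'Resetting controller successful' try.txt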
00:15:22.943 9748.80 IOPS, 38.08 MiB/s [2024-10-16T09:29:47.347Z] 9784.00 IOPS, 38.22 MiB/s [2024-10-16T09:29:47.347Z] 9815.33 IOPS, 38.34 MiB/s [2024-10-16T09:29:47.347Z] 9854.15 IOPS, 38.49 MiB/s [2024-10-16T09:29:47.347Z] 9881.71 IOPS, 38.60 MiB/s [2024-10-16T09:29:47.347Z] 9905.07 IOPS, 38.69 MiB/s
00:15:22.943 Latency(us)
00:15:22.943 [2024-10-16T09:29:47.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:22.943 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:22.943 Verification LBA range: start 0x0 length 0x4000
00:15:22.943 NVMe0n1 : 15.01 9905.35 38.69 241.23 0.00 12586.20 528.76 16324.42
00:15:22.943 [2024-10-16T09:29:47.347Z] ===================================================================================================================
00:15:22.943 [2024-10-16T09:29:47.347Z] Total : 9905.35 38.69 241.23 0.00 12586.20 528.76 16324.42
00:15:22.943 Received shutdown signal, test time was about 15.000000 seconds
00:15:22.943
00:15:22.943 Latency(us)
00:15:22.943 [2024-10-16T09:29:47.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:22.943 [2024-10-16T09:29:47.347Z] ===================================================================================================================
00:15:22.943 [2024-10-16T09:29:47.347Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=74947
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 74947 /var/tmp/bdevperf.sock
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74947 ']'
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
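The host/failover.sh@65-@67 lines above are the pass/fail gate for the traffic phase: the script counts the "Resetting controller successful" notices in the bdevperf log and requires exactly three, one per failover hop observed earlier (4420 -> 4421 -> 4422 -> 4420). A minimal sketch of that check, with $log standing in for the captured output file (an assumed name, not one the script defines):
  count=$(grep -c 'Resetting controller successful' "$log")
  (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }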
00:15:22.943 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:15:22.943 09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:15:22.943 [2024-10-16 09:29:46.900855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
09:29:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:15:22.943 [2024-10-16 09:29:47.185116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
09:29:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:15:23.202 NVMe0n1
00:15:23.202 09:29:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:15:23.461
00:15:23.720 09:29:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:15:23.978
00:15:23.978 09:29:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:29:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:15:24.237 09:29:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:24.495 09:29:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:15:27.788 09:29:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:29:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:15:27.788 09:29:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75012
09:29:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
09:29:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75012
00:15:29.166 {
00:15:29.166 "results": [
00:15:29.166 {
00:15:29.166 "job": "NVMe0n1",
00:15:29.166 "core_mask": "0x1",
00:15:29.166 "workload": "verify",
00:15:29.166 "status": "finished",
00:15:29.166 "verify_range": {
00:15:29.166 "start": 0,
00:15:29.166 "length": 16384
00:15:29.166 },
00:15:29.166 "queue_depth": 128,
00:15:29.166 "io_size": 4096, 00:15:29.166 "runtime": 1.008237, 00:15:29.166 "iops": 9043.508619501168, 00:15:29.166 "mibps": 35.32620554492644, 00:15:29.166 "io_failed": 0, 00:15:29.166 "io_timeout": 0, 00:15:29.166 "avg_latency_us": 14070.249651239306, 00:15:29.166 "min_latency_us": 1459.6654545454546, 00:15:29.166 "max_latency_us": 15371.17090909091 00:15:29.166 } 00:15:29.166 ], 00:15:29.166 "core_count": 1 00:15:29.166 } 00:15:29.166 09:29:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:29.166 [2024-10-16 09:29:46.397074] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:15:29.166 [2024-10-16 09:29:46.397174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74947 ] 00:15:29.166 [2024-10-16 09:29:46.534303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.166 [2024-10-16 09:29:46.579363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.166 [2024-10-16 09:29:46.632228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.166 [2024-10-16 09:29:48.723111] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:29.166 [2024-10-16 09:29:48.723212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.166 [2024-10-16 09:29:48.723237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.166 [2024-10-16 09:29:48.723255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.166 [2024-10-16 09:29:48.723268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.166 [2024-10-16 09:29:48.723281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.166 [2024-10-16 09:29:48.723294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.166 [2024-10-16 09:29:48.723307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.166 [2024-10-16 09:29:48.723319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.166 [2024-10-16 09:29:48.723331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:29.166 [2024-10-16 09:29:48.723376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:29.166 [2024-10-16 09:29:48.723406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21db2e0 (9): Bad file descriptor 00:15:29.166 [2024-10-16 09:29:48.727432] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:29.166 Running I/O for 1 seconds... 
00:15:29.166 8990.00 IOPS, 35.12 MiB/s 00:15:29.166 Latency(us) 00:15:29.166 [2024-10-16T09:29:53.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.166 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:29.166 Verification LBA range: start 0x0 length 0x4000 00:15:29.166 NVMe0n1 : 1.01 9043.51 35.33 0.00 0.00 14070.25 1459.67 15371.17 00:15:29.166 [2024-10-16T09:29:53.570Z] =================================================================================================================== 00:15:29.166 [2024-10-16T09:29:53.570Z] Total : 9043.51 35.33 0.00 0.00 14070.25 1459.67 15371.17 00:15:29.166 09:29:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:29.166 09:29:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:29.166 09:29:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.425 09:29:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:29.425 09:29:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:29.684 09:29:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.943 09:29:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 74947 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74947 ']' 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74947 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74947 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.232 killing process with pid 74947 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74947' 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74947 00:15:33.232 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74947 00:15:33.491 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:33.491 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:33.764 09:29:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:33.764 rmmod nvme_tcp 00:15:33.764 rmmod nvme_fabrics 00:15:33.764 rmmod nvme_keyring 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 74695 ']' 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 74695 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74695 ']' 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74695 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74695 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:33.764 killing process with pid 74695 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74695' 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74695 00:15:33.764 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74695 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:34.037 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:34.296 00:15:34.296 real 0m31.602s 00:15:34.296 user 2m1.263s 00:15:34.296 sys 0m5.752s 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:34.296 ************************************ 00:15:34.296 END TEST nvmf_failover 00:15:34.296 ************************************ 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:34.296 ************************************ 00:15:34.296 START TEST nvmf_host_discovery 00:15:34.296 ************************************ 00:15:34.296 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:34.556 * Looking for test storage... 
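The nvmf_failover run that just wrapped up reduces to a short rpc.py sequence against the target and the bdevperf application. The block below is only a condensed sketch of the calls traced above — the 10.0.0.3 listener address, ports 4420-4422, the cnode1 NQN and the /var/tmp/bdevperf.sock socket are taken from this log, while target startup, subsystem creation and the loop-based grouping are assumptions:

```bash
#!/usr/bin/env bash
# Condensed from the failover.sh rpc.py calls traced above; assumes the nvmf
# target, subsystem nqn.2016-06.io.spdk:cnode1 and a bdevperf instance with its
# RPC socket at /var/tmp/bdevperf.sock are already running.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Expose two additional listeners next to the original 4420 one.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422

# Attach all three paths to the same bdevperf controller, flagged as failover paths.
for port in 4420 4421 4422; do
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n "$nqn" -x failover
done

# Drop the primary path, give the reconnect logic a moment, then run I/O:
# traffic should keep flowing over the surviving listeners.
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn"
sleep 3
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
```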
00:15:34.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.556 --rc genhtml_branch_coverage=1 00:15:34.556 --rc genhtml_function_coverage=1 00:15:34.556 --rc genhtml_legend=1 00:15:34.556 --rc geninfo_all_blocks=1 00:15:34.556 --rc geninfo_unexecuted_blocks=1 00:15:34.556 00:15:34.556 ' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.556 --rc genhtml_branch_coverage=1 00:15:34.556 --rc genhtml_function_coverage=1 00:15:34.556 --rc genhtml_legend=1 00:15:34.556 --rc geninfo_all_blocks=1 00:15:34.556 --rc geninfo_unexecuted_blocks=1 00:15:34.556 00:15:34.556 ' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.556 --rc genhtml_branch_coverage=1 00:15:34.556 --rc genhtml_function_coverage=1 00:15:34.556 --rc genhtml_legend=1 00:15:34.556 --rc geninfo_all_blocks=1 00:15:34.556 --rc geninfo_unexecuted_blocks=1 00:15:34.556 00:15:34.556 ' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:34.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.556 --rc genhtml_branch_coverage=1 00:15:34.556 --rc genhtml_function_coverage=1 00:15:34.556 --rc genhtml_legend=1 00:15:34.556 --rc geninfo_all_blocks=1 00:15:34.556 --rc geninfo_unexecuted_blocks=1 00:15:34.556 00:15:34.556 ' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.556 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.557 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
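The NVMF_* variables assigned here name the virtual topology that nvmf_veth_init builds in the commands that follow. Stripped of the xtrace noise, the setup is roughly the sketch below; the device names and 10.0.0.x/24 addresses are the ones used in this run, and the loop is just a compaction of the individual commands:

```bash
# Rough shape of nvmf_veth_init as traced below: one network namespace for the
# target, two veth pairs per side, and a bridge joining the peer ends.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace; the initiator side stays in the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator and target addresses can reach each
# other, then open TCP/4420 on the initiator interfaces.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
```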
00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:34.557 Cannot find device "nvmf_init_br" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:34.557 Cannot find device "nvmf_init_br2" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:34.557 Cannot find device "nvmf_tgt_br" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.557 Cannot find device "nvmf_tgt_br2" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:34.557 Cannot find device "nvmf_init_br" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:34.557 Cannot find device "nvmf_init_br2" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:34.557 Cannot find device "nvmf_tgt_br" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:34.557 Cannot find device "nvmf_tgt_br2" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:34.557 Cannot find device "nvmf_br" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:34.557 Cannot find device "nvmf_init_if" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:34.557 Cannot find device "nvmf_init_if2" 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.557 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:34.557 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.816 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:34.816 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.816 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.816 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:34.816 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.816 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.816 09:29:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:34.816 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:34.817 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.817 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:34.817 00:15:34.817 --- 10.0.0.3 ping statistics --- 00:15:34.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.817 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:34.817 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:34.817 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:34.817 00:15:34.817 --- 10.0.0.4 ping statistics --- 00:15:34.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.817 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:34.817 00:15:34.817 --- 10.0.0.1 ping statistics --- 00:15:34.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.817 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:34.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:15:34.817 00:15:34.817 --- 10.0.0.2 ping statistics --- 00:15:34.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.817 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # return 0 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=75336 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 75336 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75336 ']' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.817 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.076 [2024-10-16 09:29:59.279049] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:15:35.076 [2024-10-16 09:29:59.279140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.076 [2024-10-16 09:29:59.416728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.076 [2024-10-16 09:29:59.460502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.076 [2024-10-16 09:29:59.460587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.076 [2024-10-16 09:29:59.460598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.076 [2024-10-16 09:29:59.460606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.076 [2024-10-16 09:29:59.460613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.076 [2024-10-16 09:29:59.461028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.335 [2024-10-16 09:29:59.514585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.335 [2024-10-16 09:29:59.628535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.335 [2024-10-16 09:29:59.636719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.335 09:29:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.335 null0 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.335 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.336 null1 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75362 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75362 /tmp/host.sock 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75362 ']' 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.336 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.336 09:29:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:35.336 [2024-10-16 09:29:59.713200] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:15:35.336 [2024-10-16 09:29:59.713284] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75362 ] 00:15:35.595 [2024-10-16 09:29:59.846441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.595 [2024-10-16 09:29:59.899363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.595 [2024-10-16 09:29:59.956108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 09:30:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:35.854 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.113 09:30:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.113 [2024-10-16 09:30:00.396862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:36.113 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:36.114 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.114 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:36.372 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:36.373 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:36.373 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:36.373 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.373 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.373 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.373 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:15:36.373 09:30:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:15:36.939 [2024-10-16 09:30:01.041620] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:36.939 [2024-10-16 09:30:01.041665] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:36.939 [2024-10-16 09:30:01.041700] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:36.939 
[2024-10-16 09:30:01.047640] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:36.939 [2024-10-16 09:30:01.105535] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:36.939 [2024-10-16 09:30:01.105618] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:37.507 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
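The xtrace entries around this point repeat one polling pattern: a condition string is stored in cond, max is initialised to 10, the condition is re-evaluated with eval on each pass, and sleep 1 separates attempts until get_subsystem_names reports nvme0. A minimal sketch of that wait-for-condition helper, reconstructed from the trace (the real body in common/autotest_common.sh may differ; the timeout handling at the end is an assumption):

  # Reconstructed from the xtrace above; not the verbatim SPDK helper.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # The caller passes a test such as '[[ "$(get_subsystem_names)" == "nvme0" ]]'
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      echo "condition not met: $cond" >&2
      return 1
  }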
00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.508 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.767 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.767 [2024-10-16 09:30:01.970334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:37.768 [2024-10-16 09:30:01.970796] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:37.768 [2024-10-16 09:30:01.970823] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:37.768 [2024-10-16 09:30:01.976805] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:37.768 09:30:01 
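The notification checks in this part of the trace query the host RPC socket for events newer than the last observed id and count them with jq; the counter values seen in the log (notify_id moving 0, 1, 2, 4) are consistent with advancing the id by the number of notifications returned. A plausible sketch of those two helpers as implied by the trace, where rpc_cmd is the harness wrapper around scripts/rpc.py and the notify_id bookkeeping is an assumption:

  # Sketch based on the trace; the real host/discovery.sh helpers may differ.
  get_notification_count() {
      notify_id=${notify_id:-0}
      # Count notifications issued after the last observed id.
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  is_notification_count_eq() {
      local expected_count=$1
      # waitforcondition is the polling helper sketched earlier; bash dynamic
      # scoping lets the eval'd condition see expected_count.
      waitforcondition 'get_notification_count && ((notification_count == expected_count))'
  }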
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.768 09:30:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:37.768 [2024-10-16 09:30:02.035232] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:37.768 [2024-10-16 09:30:02.035250] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:37.768 [2024-10-16 09:30:02.035257] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:37.768 09:30:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:37.768 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.028 [2024-10-16 09:30:02.207095] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:38.028 [2024-10-16 09:30:02.207138] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:38.028 [2024-10-16 09:30:02.212165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.028 [2024-10-16 09:30:02.212197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.028 [2024-10-16 09:30:02.212209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.028 [2024-10-16 09:30:02.212217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.028 [2024-10-16 09:30:02.212227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.028 [2024-10-16 09:30:02.212235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.028 [2024-10-16 09:30:02.212244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.028 [2024-10-16 09:30:02.212253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.028 [2024-10-16 09:30:02.212276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9f3950 is same with the state(6) to be set 00:15:38.028 [2024-10-16 09:30:02.213113] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:38.028 [2024-10-16 09:30:02.213137] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:38.028 [2024-10-16 09:30:02.213183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f3950 (9): Bad file descriptor 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:38.028 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.029 09:30:02 
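Just below, the trace verifies that removing the 4420 listener leaves nvme0 with only the 4421 path. That check is driven by a helper that lists the trsvcid of every path on a controller; a sketch of it as implied by the jq/sort/xargs pipeline at host/discovery.sh@63 (the port values shown are the ones used in this run and are defined elsewhere in the test environment):

  # Sketch of the path-listing helper implied by the trace.
  get_subsystem_paths() {
      local name=$1
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

  # Values visible in this run: NVMF_PORT=4420, NVMF_SECOND_PORT=4421.
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'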
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.029 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:38.288 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.289 09:30:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.226 [2024-10-16 09:30:03.622223] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:39.226 [2024-10-16 09:30:03.623469] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:39.226 [2024-10-16 09:30:03.623502] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:39.226 [2024-10-16 09:30:03.628590] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:39.487 [2024-10-16 09:30:03.690606] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:39.487 [2024-10-16 09:30:03.690820] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.487 request: 00:15:39.487 { 00:15:39.487 "name": "nvme", 00:15:39.487 "trtype": "tcp", 00:15:39.487 "traddr": "10.0.0.3", 00:15:39.487 "adrfam": "ipv4", 00:15:39.487 "trsvcid": "8009", 00:15:39.487 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:39.487 "wait_for_attach": true, 00:15:39.487 "method": "bdev_nvme_start_discovery", 00:15:39.487 "req_id": 1 00:15:39.487 } 00:15:39.487 Got JSON-RPC error response 00:15:39.487 response: 00:15:39.487 { 00:15:39.487 "code": -17, 00:15:39.487 "message": "File exists" 00:15:39.487 } 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:39.487 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.488 request: 00:15:39.488 { 00:15:39.488 "name": "nvme_second", 00:15:39.488 "trtype": "tcp", 00:15:39.488 "traddr": "10.0.0.3", 00:15:39.488 "adrfam": "ipv4", 00:15:39.488 "trsvcid": "8009", 00:15:39.488 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:39.488 "wait_for_attach": true, 00:15:39.488 "method": "bdev_nvme_start_discovery", 00:15:39.488 "req_id": 1 00:15:39.488 } 00:15:39.488 Got JSON-RPC error response 00:15:39.488 response: 00:15:39.488 { 00:15:39.488 "code": -17, 00:15:39.488 "message": "File exists" 00:15:39.488 } 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.488 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:39.748 09:30:03 
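Both bdev_nvme_start_discovery attempts above are wrapped in NOT, so the -17 "File exists" responses are the expected outcome: registering a second discovery service against the same 8009 trid, or reusing the name nvme, must be rejected. A rough sketch of that expect-failure wrapper, suggested by the es/valid_exec_arg bookkeeping in the trace (the real common/autotest_common.sh implementation handles more cases):

  # Simplified sketch: succeed only if the wrapped command fails normally.
  NOT() {
      local es=0
      "$@" || es=$?
      # A non-zero, non-signal exit status counts as the expected failure.
      (( es != 0 )) && (( es <= 128 ))
  }

  # Usage as in the trace: a duplicate discovery registration must fail.
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w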
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.748 09:30:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.738 [2024-10-16 09:30:04.959222] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:40.738 [2024-10-16 09:30:04.959441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7fb30 with addr=10.0.0.3, port=8010 00:15:40.738 [2024-10-16 09:30:04.959486] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:40.738 [2024-10-16 09:30:04.959496] nvme.c: 844:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:40.738 [2024-10-16 09:30:04.959505] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:41.674 [2024-10-16 09:30:05.959207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:41.674 [2024-10-16 09:30:05.959262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa7fb30 with addr=10.0.0.3, port=8010 00:15:41.674 [2024-10-16 09:30:05.959286] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:41.674 [2024-10-16 09:30:05.959294] nvme.c: 844:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:15:41.674 [2024-10-16 09:30:05.959301] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:42.611 [2024-10-16 09:30:06.959126] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:42.611 request: 00:15:42.611 { 00:15:42.611 "name": "nvme_second", 00:15:42.611 "trtype": "tcp", 00:15:42.611 "traddr": "10.0.0.3", 00:15:42.611 "adrfam": "ipv4", 00:15:42.611 "trsvcid": "8010", 00:15:42.612 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:42.612 "wait_for_attach": false, 00:15:42.612 "attach_timeout_ms": 3000, 00:15:42.612 "method": "bdev_nvme_start_discovery", 00:15:42.612 "req_id": 1 00:15:42.612 } 00:15:42.612 Got JSON-RPC error response 00:15:42.612 response: 00:15:42.612 { 00:15:42.612 "code": -110, 00:15:42.612 "message": "Connection timed out" 00:15:42.612 } 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:42.612 09:30:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75362 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:42.871 rmmod nvme_tcp 00:15:42.871 rmmod nvme_fabrics 00:15:42.871 rmmod nvme_keyring 00:15:42.871 09:30:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 75336 ']' 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 75336 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75336 ']' 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75336 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75336 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:42.871 killing process with pid 75336 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75336' 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75336 00:15:42.871 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75336 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:43.129 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:43.388 00:15:43.388 real 0m8.984s 00:15:43.388 user 0m17.111s 00:15:43.388 sys 0m1.950s 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:43.388 ************************************ 00:15:43.388 END TEST nvmf_host_discovery 00:15:43.388 ************************************ 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:43.388 ************************************ 00:15:43.388 START TEST nvmf_host_multipath_status 00:15:43.388 ************************************ 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:43.388 * Looking for test storage... 
00:15:43.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:15:43.388 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.648 --rc genhtml_branch_coverage=1 00:15:43.648 --rc genhtml_function_coverage=1 00:15:43.648 --rc genhtml_legend=1 00:15:43.648 --rc geninfo_all_blocks=1 00:15:43.648 --rc geninfo_unexecuted_blocks=1 00:15:43.648 00:15:43.648 ' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.648 --rc genhtml_branch_coverage=1 00:15:43.648 --rc genhtml_function_coverage=1 00:15:43.648 --rc genhtml_legend=1 00:15:43.648 --rc geninfo_all_blocks=1 00:15:43.648 --rc geninfo_unexecuted_blocks=1 00:15:43.648 00:15:43.648 ' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.648 --rc genhtml_branch_coverage=1 00:15:43.648 --rc genhtml_function_coverage=1 00:15:43.648 --rc genhtml_legend=1 00:15:43.648 --rc geninfo_all_blocks=1 00:15:43.648 --rc geninfo_unexecuted_blocks=1 00:15:43.648 00:15:43.648 ' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:43.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.648 --rc genhtml_branch_coverage=1 00:15:43.648 --rc genhtml_function_coverage=1 00:15:43.648 --rc genhtml_legend=1 00:15:43.648 --rc geninfo_all_blocks=1 00:15:43.648 --rc geninfo_unexecuted_blocks=1 00:15:43.648 00:15:43.648 ' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.648 09:30:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.648 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:43.649 Cannot find device "nvmf_init_br" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:43.649 Cannot find device "nvmf_init_br2" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:43.649 Cannot find device "nvmf_tgt_br" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.649 Cannot find device "nvmf_tgt_br2" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:43.649 Cannot find device "nvmf_init_br" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:43.649 Cannot find device "nvmf_init_br2" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:43.649 Cannot find device "nvmf_tgt_br" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:43.649 Cannot find device "nvmf_tgt_br2" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:43.649 Cannot find device "nvmf_br" 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:43.649 09:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:43.649 Cannot find device "nvmf_init_if" 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:43.649 Cannot find device "nvmf_init_if2" 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.649 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.649 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:43.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:43.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:43.909 00:15:43.909 --- 10.0.0.3 ping statistics --- 00:15:43.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.909 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:43.909 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:43.909 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:43.909 00:15:43.909 --- 10.0.0.4 ping statistics --- 00:15:43.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.909 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:43.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:43.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:43.909 00:15:43.909 --- 10.0.0.1 ping statistics --- 00:15:43.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.909 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:43.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:43.909 00:15:43.909 --- 10.0.0.2 ping statistics --- 00:15:43.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.909 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # return 0 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=75848 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 75848 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75848 ']' 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
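Note: the entries above bring up the veth/bridge topology (initiator addresses 10.0.0.1 and 10.0.0.2, target addresses 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, all joined on nvmf_br) and then launch the target in that namespace. A minimal sketch of the launch-and-wait step follows; the real waitforlisten helper in autotest_common.sh also tracks the pid, and the rpc_get_methods polling loop here is only an illustrative stand-in, not the actual helper:

  # Start nvmf_tgt inside the target namespace: shm id 0, all tracepoints, core mask 0x3 (as logged above).
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Assumed stand-in for waitforlisten: block until the app answers JSON-RPC on the default socket.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done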
00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:43.909 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:44.168 [2024-10-16 09:30:08.348853] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:15:44.168 [2024-10-16 09:30:08.348941] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.168 [2024-10-16 09:30:08.488784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:44.168 [2024-10-16 09:30:08.536625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.168 [2024-10-16 09:30:08.536835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.168 [2024-10-16 09:30:08.537005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.168 [2024-10-16 09:30:08.537055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.168 [2024-10-16 09:30:08.537167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.168 [2024-10-16 09:30:08.538407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.168 [2024-10-16 09:30:08.538415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.428 [2024-10-16 09:30:08.592842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=75848 00:15:44.428 09:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:44.687 [2024-10-16 09:30:08.991762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.687 09:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:44.946 Malloc0 00:15:44.946 09:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:45.204 09:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:45.462 09:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:45.721 [2024-10-16 09:30:10.022978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:45.721 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:45.979 [2024-10-16 09:30:10.295061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=75896 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 75896 /var/tmp/bdevperf.sock 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 75896 ']' 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
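Note: with bdevperf started idle (-z) on /var/tmp/bdevperf.sock, the multipath bdev is assembled next by attaching the same subsystem over both target listeners; the attach sequence that follows in the log is condensed below as a sketch (flags copied from the log entries, comments are interpretation rather than authoritative documentation):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Allow unlimited bdev-level I/O retries so a failed I/O can be resubmitted on the surviving path.
  $rpc_py -s $sock bdev_nvme_set_options -r -1
  # Attaching nqn.2016-06.io.spdk:cnode1 twice with -x multipath yields a single Nvme0n1 bdev
  # with two paths, 10.0.0.3:4420 and 10.0.0.3:4421, which the port_status checks below inspect.
  $rpc_py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $rpc_py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10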
00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.979 09:30:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:46.915 09:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.915 09:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:15:46.915 09:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:47.173 09:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:47.431 Nvme0n1 00:15:47.689 09:30:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:47.948 Nvme0n1 00:15:47.948 09:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:47.948 09:30:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:49.890 09:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:49.890 09:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:50.160 09:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:50.418 09:30:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:51.353 09:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:51.353 09:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:51.353 09:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.353 09:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:51.612 09:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.612 09:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:51.612 09:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.612 09:30:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:51.870 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:51.870 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:51.870 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:51.870 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.129 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.129 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:52.129 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.129 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:52.695 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.695 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:52.695 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.695 09:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:52.695 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.695 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:52.695 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:52.695 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.953 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.953 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:52.953 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:53.211 09:30:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:53.469 09:30:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:54.404 09:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:54.404 09:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:54.404 09:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.404 09:30:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:54.662 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:54.662 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:54.662 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.662 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.229 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:55.488 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.488 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:55.488 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.488 09:30:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:55.746 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.747 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:55.747 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.747 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:56.005 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:56.005 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:56.005 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:56.264 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:56.522 09:30:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:57.460 09:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:57.460 09:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:57.460 09:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.460 09:30:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:58.027 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.594 09:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:58.853 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.853 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:58.853 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.853 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:59.419 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.419 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:59.419 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:59.419 09:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:59.677 09:30:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:01.052 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:01.052 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:01.052 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.052 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:01.052 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.052 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:01.052 09:30:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.052 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.311 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.311 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.311 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.311 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.570 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.570 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.570 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.570 09:30:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.829 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.829 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:01.829 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.829 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:02.087 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.087 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:02.087 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.087 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:02.346 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.346 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:02.346 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:02.604 09:30:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:02.862 09:30:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:03.796 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:03.796 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:03.796 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:03.796 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.054 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.054 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:04.054 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.054 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.312 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.312 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.312 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.312 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.612 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.612 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:04.612 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:04.612 09:30:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.871 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.871 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:04.871 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.871 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:05.130 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:05.130 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:05.130 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.130 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:05.388 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:05.388 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:05.388 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:05.647 09:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:05.905 09:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:06.841 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:06.841 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:06.841 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.841 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:07.409 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:16:07.668 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.668 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:07.668 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.668 09:30:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:07.926 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.926 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:07.926 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:07.926 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.184 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.184 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:08.184 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:08.184 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.442 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.442 09:30:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:08.699 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:08.699 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:08.958 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:09.217 09:30:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:10.152 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:10.152 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:10.152 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
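[editor's note] Each set_ANA_state step in this trace is simply two nvmf_subsystem_listener_set_ana_state calls, one per listener port, issued against the target-side RPC socket, while the multipath policy switch just before it goes to the bdevperf RPC socket instead. A hedged sketch of that pairing, reusing the subsystem NQN, address, and ports shown above (the wrapper function name is illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # set_ana_state <state-for-4420> <state-for-4421>
    # States exercised in this run: optimized, non_optimized, inaccessible.
    set_ana_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # Switch the host-side policy to active/active, make both paths optimized,
    # then give the host a moment to observe the change (mirrors steps @116-@120 above).
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ana_state optimized optimized
    sleep 1
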
00:16:10.152 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:10.410 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.410 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:10.410 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.410 09:30:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.977 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.977 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.978 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:10.978 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.978 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.978 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:10.978 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.978 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.236 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.236 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:11.236 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.236 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.495 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.495 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:11.495 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.495 09:30:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:11.754 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.754 
09:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:11.754 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:12.012 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:12.270 09:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:13.205 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:13.205 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:13.205 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.205 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:13.799 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:13.799 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:13.799 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:13.799 09:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.799 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.799 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:13.799 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:13.799 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.062 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.062 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.062 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:14.062 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.321 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.321 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.321 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.321 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.580 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.580 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:14.580 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.580 09:30:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:14.838 09:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.838 09:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:14.838 09:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:15.097 09:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:15.355 09:30:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:16.291 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:16.291 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.291 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.291 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.550 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.550 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:16.550 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.550 09:30:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.116 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.375 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.375 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.375 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.375 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.634 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.634 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:17.634 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.634 09:30:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:17.893 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.893 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:17.893 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:18.151 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:18.410 09:30:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:19.345 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:19.345 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:19.345 09:30:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.345 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:19.604 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:19.604 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:19.604 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.604 09:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:19.863 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:19.863 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:19.863 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:19.863 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.429 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:20.688 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.688 09:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:20.688 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:20.688 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 75896 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75896 ']' 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75896 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75896 00:16:20.949 killing process with pid 75896 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75896' 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75896 00:16:20.949 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75896 00:16:20.949 { 00:16:20.949 "results": [ 00:16:20.949 { 00:16:20.949 "job": "Nvme0n1", 00:16:20.949 "core_mask": "0x4", 00:16:20.949 "workload": "verify", 00:16:20.949 "status": "terminated", 00:16:20.949 "verify_range": { 00:16:20.949 "start": 0, 00:16:20.949 "length": 16384 00:16:20.949 }, 00:16:20.949 "queue_depth": 128, 00:16:20.949 "io_size": 4096, 00:16:20.949 "runtime": 33.018333, 00:16:20.949 "iops": 9337.35812768016, 00:16:20.949 "mibps": 36.474055186250624, 00:16:20.949 "io_failed": 0, 00:16:20.949 "io_timeout": 0, 00:16:20.949 "avg_latency_us": 13680.966891898906, 00:16:20.949 "min_latency_us": 629.2945454545454, 00:16:20.949 "max_latency_us": 4026531.84 00:16:20.949 } 00:16:20.949 ], 00:16:20.949 "core_count": 1 00:16:20.949 } 00:16:21.231 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 75896 00:16:21.231 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:21.231 [2024-10-16 09:30:10.369968] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:16:21.231 [2024-10-16 09:30:10.370072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75896 ] 00:16:21.231 [2024-10-16 09:30:10.512167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.231 [2024-10-16 09:30:10.572790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.231 [2024-10-16 09:30:10.629168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:21.231 Running I/O for 90 seconds... 
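[editor's note] The bdevperf summary printed above is internally consistent: the reported MiB/s is just IOPS times io_size, i.e. 9337.358 x 4096 B per second is roughly 38.25 MB/s, or 36.47 MiB/s, matching the printed "mibps" value. A quick recomputation of that relation, with the numbers copied from the results block (this is only a sanity check, not part of the test):

    # Recompute MiB/s from the reported IOPS and 4096 B io_size.
    awk 'BEGIN { iops = 9337.35812768016; io_size = 4096;
                 printf "%.3f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # -> 36.474, agreeing with "mibps" in the results JSON above.
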
00:16:21.231 8084.00 IOPS, 31.58 MiB/s [2024-10-16T09:30:45.635Z] 8018.00 IOPS, 31.32 MiB/s [2024-10-16T09:30:45.635Z] 8028.33 IOPS, 31.36 MiB/s [2024-10-16T09:30:45.635Z] 8009.00 IOPS, 31.29 MiB/s [2024-10-16T09:30:45.635Z] 7965.80 IOPS, 31.12 MiB/s [2024-10-16T09:30:45.635Z] 8256.83 IOPS, 32.25 MiB/s [2024-10-16T09:30:45.635Z] 8571.00 IOPS, 33.48 MiB/s [2024-10-16T09:30:45.635Z] 8780.62 IOPS, 34.30 MiB/s [2024-10-16T09:30:45.635Z] 8981.78 IOPS, 35.09 MiB/s [2024-10-16T09:30:45.635Z] 9154.00 IOPS, 35.76 MiB/s [2024-10-16T09:30:45.635Z] 9284.73 IOPS, 36.27 MiB/s [2024-10-16T09:30:45.635Z] 9399.00 IOPS, 36.71 MiB/s [2024-10-16T09:30:45.635Z] 9510.46 IOPS, 37.15 MiB/s [2024-10-16T09:30:45.635Z] 9585.43 IOPS, 37.44 MiB/s [2024-10-16T09:30:45.635Z] [2024-10-16 09:30:26.816428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.816836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.816871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.816946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.816979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.816999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.817014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.817034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.817048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.817067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.817082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.817101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.817115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.817134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.231 [2024-10-16 09:30:26.817148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.817186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.817206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:21.231 [2024-10-16 09:30:26.817227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.231 [2024-10-16 09:30:26.817241] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.817894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.817930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.817950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.817974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.818027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.818062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.818096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:93 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.818131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.818165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.232 [2024-10-16 09:30:26.818199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:21.232 [2024-10-16 09:30:26.818758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.232 [2024-10-16 09:30:26.818774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.818794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.818809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.818829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.818852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.818872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.818903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:16:21.233 [2024-10-16 09:30:26.818923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.818938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.818967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.818983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.233 [2024-10-16 09:30:26.819610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.819965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.819980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:21.233 [2024-10-16 09:30:26.820001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.233 [2024-10-16 09:30:26.820015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:21.234 [2024-10-16 09:30:26.820052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.820101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.820135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.820177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.820728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.820743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.821548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.821634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.821707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.821750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.821791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.821833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.821889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.821915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.821930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.822064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.822108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.822148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
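The command prints above all carry the same few fields: sqid/cid identify the queue pair and command slot, nsid the namespace, and lba/len the starting block and block count. The 0x1000-byte data segment shown for the WRITE entries lines up with "len:8" only if the namespace uses 512-byte logical blocks; that block size is an assumption here, not something the log states. A small standalone sketch of the relationship:

/* Standalone sketch: relate the lba/len fields printed above to a byte
 * offset and transfer size.  block_size = 512 is an assumption; it is what
 * makes "len:8" match the 0x1000-byte SGL segment in the WRITE entries. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t lba = 68400;      /* slba taken from one WRITE entry above */
    uint32_t nlb = 8;          /* "len:8" = number of logical blocks */
    uint32_t block_size = 512; /* assumed LBA format */

    printf("byte offset %llu, transfer size 0x%x\n",
           (unsigned long long)(lba * block_size), nlb * block_size);
    /* -> byte offset 35020800, transfer size 0x1000 */
    return 0;
}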
00:16:21.234 [2024-10-16 09:30:26.822173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.822188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.822228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.822267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.234 [2024-10-16 09:30:26.822319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.822359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.822400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.822440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:21.234 [2024-10-16 09:30:26.822465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.234 [2024-10-16 09:30:26.822480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:26.822505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:26.822519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:26.822544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:26.822575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:26.822601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:26.822628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:26.822674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:26.822690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:21.235 9300.53 IOPS, 36.33 MiB/s [2024-10-16T09:30:45.639Z] 8719.25 IOPS, 34.06 MiB/s [2024-10-16T09:30:45.639Z] 8206.35 IOPS, 32.06 MiB/s [2024-10-16T09:30:45.639Z] 7750.44 IOPS, 30.28 MiB/s [2024-10-16T09:30:45.639Z] 7618.05 IOPS, 29.76 MiB/s [2024-10-16T09:30:45.639Z] 7751.90 IOPS, 30.28 MiB/s [2024-10-16T09:30:45.639Z] 7883.57 IOPS, 30.80 MiB/s [2024-10-16T09:30:45.639Z] 8178.00 IOPS, 31.95 MiB/s [2024-10-16T09:30:45.639Z] 8414.91 IOPS, 32.87 MiB/s [2024-10-16T09:30:45.639Z] 8643.62 IOPS, 33.76 MiB/s [2024-10-16T09:30:45.639Z] 8725.12 IOPS, 34.08 MiB/s [2024-10-16T09:30:45.639Z] 8782.46 IOPS, 34.31 MiB/s [2024-10-16T09:30:45.639Z] 8831.11 IOPS, 34.50 MiB/s [2024-10-16T09:30:45.639Z] 8937.07 IOPS, 34.91 MiB/s [2024-10-16T09:30:45.639Z] 9010.97 IOPS, 35.20 MiB/s [2024-10-16T09:30:45.639Z] 9171.30 IOPS, 35.83 MiB/s [2024-10-16T09:30:45.639Z] [2024-10-16 09:30:42.723781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.723846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.723895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.723937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.723960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.723974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.723994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.235 [2024-10-16 09:30:42.724513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.235 [2024-10-16 09:30:42.724713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:21.235 [2024-10-16 09:30:42.724733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.724747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
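The bracketed per-interval samples above (9300.53 IOPS, 36.33 MiB/s, dipping to 7618.05 IOPS while the paths report inaccessible and recovering to 9171.30 IOPS afterwards) are self-consistent with 4 KiB I/O, since MiB/s = IOPS x 4096 / 2^20 = IOPS / 256. The 4 KiB size is inferred from the len:8 x 512 B commands rather than stated by the log; a trivial standalone check:

/* Standalone sketch: verify the IOPS -> MiB/s samples above assuming 4 KiB I/O. */
#include <stdio.h>

int main(void)
{
    const double iops[] = { 9300.53, 8719.25, 7618.05, 9171.30 };
    for (unsigned i = 0; i < sizeof(iops) / sizeof(iops[0]); i++)
        printf("%8.2f IOPS -> %6.2f MiB/s\n", iops[i], iops[i] / 256.0);
    /* -> 36.33, 34.06, 29.76, 35.83 MiB/s, matching the samples in the log */
    return 0;
}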
00:16:21.236 [2024-10-16 09:30:42.724766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.724780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.724809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.724825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.724845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.724860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.724879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.724894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.724914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.724929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.724948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.724977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.724996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.236 [2024-10-16 09:30:42.725699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.725733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.236 [2024-10-16 09:30:42.725757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:21.236 [2024-10-16 09:30:42.727046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:21.237 [2024-10-16 09:30:42.727182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.727864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.727972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.727991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.728005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.728038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.728072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.728109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.728143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.728175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.728208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 
dnr:0 00:16:21.237 [2024-10-16 09:30:42.728227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.728240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.728273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.728305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.728347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.237 [2024-10-16 09:30:42.728379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.237 [2024-10-16 09:30:42.728411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:21.237 [2024-10-16 09:30:42.728430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.728443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.728462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.728477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.729811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.729841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.729868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.729900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.729935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.729949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.729968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.729982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:21.238 [2024-10-16 09:30:42.730641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.730947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.730966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 
nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.730980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.731011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.731027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.731046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.731060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.731078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.238 [2024-10-16 09:30:42.731092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:21.238 [2024-10-16 09:30:42.731111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.238 [2024-10-16 09:30:42.731125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.731144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.731157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.731176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.731190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.732757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.732786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.732812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.732828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.732847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.732861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.732880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.732894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.732914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.732927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.732946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.732960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.732991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:16:21.239 [2024-10-16 09:30:42.733227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.733923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.733959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.733978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.734000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.734024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.734045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.239 [2024-10-16 09:30:42.734060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.734079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.734099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.734134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.239 [2024-10-16 09:30:42.734148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:21.239 [2024-10-16 09:30:42.734167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.734180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.734213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.734245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.734279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.734311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.734349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.734382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.734414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:21.240 [2024-10-16 09:30:42.734447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.734492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.734511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.734526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.736507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.736548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.736616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.736661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.736697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.736731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.736764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.736798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.736831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.736865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.736928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.736961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.736980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.736993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.737025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.737058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.737107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.737145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.737178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.737211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.737249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.737313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.737351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.737396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.240 [2024-10-16 09:30:42.737435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.737472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.240 [2024-10-16 09:30:42.737508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:21.240 [2024-10-16 09:30:42.737530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 
dnr:0 00:16:21.241 [2024-10-16 09:30:42.737579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.737648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.737700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.737754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.737978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.737992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.738012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.738026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.738046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.738060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.738079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.738094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.738113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.738142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.738162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.738176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.738196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.738210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.739991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.740017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.740056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.740196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.740260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.740356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:21.241 [2024-10-16 09:30:42.740451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.241 [2024-10-16 09:30:42.740482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.740513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:21.241 [2024-10-16 09:30:42.740539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.241 [2024-10-16 09:30:42.740569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.740607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.740635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.740658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.740673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.740692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.740707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.740726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.740741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.740761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.740775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.740795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.740810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.741746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 
nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.741775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.741810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.741845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.741880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.741910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.741928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.741941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.741959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.741972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.741990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.742015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.742035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.742049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.742067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.742081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.742099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.742112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.742130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.742143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.742162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.742176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:21.242 [2024-10-16 09:30:42.743510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.242 [2024-10-16 09:30:42.743809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.242 [2024-10-16 09:30:42.743844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:21.242 [2024-10-16 09:30:42.743863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.243 [2024-10-16 09:30:42.743902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:21.243 [2024-10-16 09:30:42.743952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.243 [2024-10-16 09:30:42.743967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:21.243 9261.68 IOPS, 36.18 MiB/s [2024-10-16T09:30:45.647Z] 9302.50 IOPS, 36.34 MiB/s [2024-10-16T09:30:45.647Z] 9338.67 IOPS, 36.48 MiB/s [2024-10-16T09:30:45.647Z] Received shutdown signal, test time was about 33.019178 seconds 00:16:21.243 00:16:21.243 Latency(us) 00:16:21.243 [2024-10-16T09:30:45.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.243 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:21.243 Verification LBA range: start 0x0 length 0x4000 00:16:21.243 Nvme0n1 : 33.02 9337.36 36.47 0.00 0.00 13680.97 629.29 4026531.84 00:16:21.243 [2024-10-16T09:30:45.647Z] =================================================================================================================== 00:16:21.243 [2024-10-16T09:30:45.647Z] Total : 9337.36 36.47 0.00 0.00 13680.97 629.29 4026531.84 00:16:21.243 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.509 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:21.510 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:21.510 rmmod nvme_tcp 00:16:21.510 rmmod nvme_fabrics 00:16:21.510 rmmod nvme_keyring 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 75848 ']' 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 75848 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 75848 ']' 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 75848 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # 
uname 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75848 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.769 killing process with pid 75848 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75848' 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 75848 00:16:21.769 09:30:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 75848 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:21.769 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.028 
09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:22.028 00:16:22.028 real 0m38.733s 00:16:22.028 user 2m5.336s 00:16:22.028 sys 0m11.122s 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.028 09:30:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:22.028 ************************************ 00:16:22.028 END TEST nvmf_host_multipath_status 00:16:22.028 ************************************ 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.288 ************************************ 00:16:22.288 START TEST nvmf_discovery_remove_ifc 00:16:22.288 ************************************ 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:22.288 * Looking for test storage... 
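The nvmftestfini teardown traced above reduces to a short sequence; what follows is a condensed sketch of it, not the script itself. The interface, bridge and namespace names (nvmf_br, nvmf_init_if*, nvmf_tgt_if*, nvmf_tgt_ns_spdk) are the ones shown in the trace; the '|| true' guards and the final 'ip netns delete' are illustrative assumptions added so the sketch keeps going when a device is already gone.

  # condensed sketch of the nvmf_tcp_fini / nvmf_veth_fini steps traced above
  modprobe -v -r nvme-tcp || true                       # also unloads nvme_fabrics / nvme_keyring deps
  modprobe -v -r nvme-fabrics || true
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK_NVMF-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true                # detach from the nvmf_br bridge
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  ip netns delete nvmf_tgt_ns_spdk || true              # assumption: the effect of _remove_spdk_ns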
00:16:22.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.288 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.289 --rc genhtml_branch_coverage=1 00:16:22.289 --rc genhtml_function_coverage=1 00:16:22.289 --rc genhtml_legend=1 00:16:22.289 --rc geninfo_all_blocks=1 00:16:22.289 --rc geninfo_unexecuted_blocks=1 00:16:22.289 00:16:22.289 ' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.289 --rc genhtml_branch_coverage=1 00:16:22.289 --rc genhtml_function_coverage=1 00:16:22.289 --rc genhtml_legend=1 00:16:22.289 --rc geninfo_all_blocks=1 00:16:22.289 --rc geninfo_unexecuted_blocks=1 00:16:22.289 00:16:22.289 ' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.289 --rc genhtml_branch_coverage=1 00:16:22.289 --rc genhtml_function_coverage=1 00:16:22.289 --rc genhtml_legend=1 00:16:22.289 --rc geninfo_all_blocks=1 00:16:22.289 --rc geninfo_unexecuted_blocks=1 00:16:22.289 00:16:22.289 ' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:22.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.289 --rc genhtml_branch_coverage=1 00:16:22.289 --rc genhtml_function_coverage=1 00:16:22.289 --rc genhtml_legend=1 00:16:22.289 --rc geninfo_all_blocks=1 00:16:22.289 --rc geninfo_unexecuted_blocks=1 00:16:22.289 00:16:22.289 ' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.289 09:30:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:22.289 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:22.289 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.290 09:30:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:22.290 Cannot find device "nvmf_init_br" 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:22.290 Cannot find device "nvmf_init_br2" 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:22.290 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:22.549 Cannot find device "nvmf_tgt_br" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.549 Cannot find device "nvmf_tgt_br2" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:22.549 Cannot find device "nvmf_init_br" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:22.549 Cannot find device "nvmf_init_br2" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:22.549 Cannot find device "nvmf_tgt_br" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:22.549 Cannot find device "nvmf_tgt_br2" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:22.549 Cannot find device "nvmf_br" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:22.549 Cannot find device "nvmf_init_if" 00:16:22.549 09:30:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:22.549 Cannot find device "nvmf_init_if2" 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.549 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.550 09:30:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.550 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:22.809 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.809 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.809 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.809 09:30:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:22.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:16:22.809 00:16:22.809 --- 10.0.0.3 ping statistics --- 00:16:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.809 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:22.809 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:22.809 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:16:22.809 00:16:22.809 --- 10.0.0.4 ping statistics --- 00:16:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.809 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:22.809 00:16:22.809 --- 10.0.0.1 ping statistics --- 00:16:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.809 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:22.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:16:22.809 00:16:22.809 --- 10.0.0.2 ping statistics --- 00:16:22.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.809 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # return 0 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:22.809 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=76735 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 76735 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76735 ']' 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.810 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.810 [2024-10-16 09:30:47.110690] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:16:22.810 [2024-10-16 09:30:47.110775] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.068 [2024-10-16 09:30:47.252378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.068 [2024-10-16 09:30:47.304142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.068 [2024-10-16 09:30:47.304214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.068 [2024-10-16 09:30:47.304228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.068 [2024-10-16 09:30:47.304238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.068 [2024-10-16 09:30:47.304248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.068 [2024-10-16 09:30:47.304713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.068 [2024-10-16 09:30:47.361153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.068 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.326 [2024-10-16 09:30:47.481653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.326 [2024-10-16 09:30:47.489808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:23.326 null0 00:16:23.326 [2024-10-16 09:30:47.521725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76764 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76764 /tmp/host.sock 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76764 ']' 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.326 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.326 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.326 [2024-10-16 09:30:47.602486] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:16:23.326 [2024-10-16 09:30:47.602621] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76764 ] 00:16:23.586 [2024-10-16 09:30:47.744100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.586 [2024-10-16 09:30:47.798489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:23.586 [2024-10-16 09:30:47.924022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.586 09:30:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.965 [2024-10-16 09:30:48.984604] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:24.965 [2024-10-16 09:30:48.984648] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:24.965 [2024-10-16 09:30:48.984663] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:24.965 [2024-10-16 09:30:48.990639] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:24.965 [2024-10-16 09:30:49.048490] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:24.965 [2024-10-16 09:30:49.048577] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:24.965 [2024-10-16 09:30:49.048614] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:24.965 [2024-10-16 09:30:49.048629] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:24.965 [2024-10-16 09:30:49.048647] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.965 [2024-10-16 09:30:49.053893] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x987400 was disconnected and freed. delete nvme_qpair. 
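The repeated bdev_get_bdevs calls that follow are the test's wait_for_bdev polling loop. Below is a minimal sketch of that pattern, assuming the suite's rpc_cmd helper and the /tmp/host.sock RPC socket shown above; the helper names mirror discovery_remove_ifc.sh, while the 30-iteration cap is an illustrative assumption rather than a value taken from the log.

  get_bdev_list() {
      # list bdev names reported by the host app, normalized to one sorted line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll once a second until the bdev list matches what the test expects
      local expected=$1 i
      for ((i = 0; i < 30; i++)); do
          [[ "$(get_bdev_list)" == "$expected" ]] && return 0
          sleep 1
      done
      return 1
  }
  wait_for_bdev nvme0n1   # discovery has attached the namespace
  wait_for_bdev ''        # and it should disappear once the target interface goes away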
00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.965 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.966 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.966 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.966 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.966 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.966 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.966 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:24.966 09:30:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.903 09:30:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:25.903 09:30:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:26.840 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.840 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.840 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.840 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.840 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.098 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.098 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.098 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.098 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:27.098 09:30:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:28.036 09:30:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:29.034 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.035 09:30:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:29.035 09:30:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.412 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:30.413 09:30:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.413 [2024-10-16 09:30:54.475506] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:30.413 [2024-10-16 09:30:54.475618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.413 [2024-10-16 09:30:54.475635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.413 [2024-10-16 09:30:54.475648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.413 [2024-10-16 09:30:54.475658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.413 [2024-10-16 09:30:54.475668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.413 [2024-10-16 09:30:54.475677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.413 [2024-10-16 09:30:54.475687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.413 [2024-10-16 09:30:54.475696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.413 [2024-10-16 09:30:54.475706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.413 [2024-10-16 09:30:54.475715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.413 [2024-10-16 09:30:54.475724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95af70 is same with the state(6) to be set 00:16:30.413 [2024-10-16 09:30:54.485502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95af70 (9): Bad file descriptor 00:16:30.413 [2024-10-16 09:30:54.495518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:31.351 [2024-10-16 09:30:55.534643] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:31.351 [2024-10-16 09:30:55.534944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95af70 with addr=10.0.0.3, port=4420 00:16:31.351 [2024-10-16 09:30:55.535187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95af70 is same with the state(6) to be set 00:16:31.351 [2024-10-16 09:30:55.535387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95af70 (9): Bad file descriptor 00:16:31.351 [2024-10-16 09:30:55.536103] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:31.351 [2024-10-16 09:30:55.536184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:31.351 [2024-10-16 09:30:55.536207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:31.351 [2024-10-16 09:30:55.536226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:31.351 [2024-10-16 09:30:55.536258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:31.351 [2024-10-16 09:30:55.536276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:31.351 09:30:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.292 [2024-10-16 09:30:56.536319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:16:32.292 [2024-10-16 09:30:56.536351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:32.292 [2024-10-16 09:30:56.536377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:32.292 [2024-10-16 09:30:56.536385] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:32.292 [2024-10-16 09:30:56.536401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:32.292 [2024-10-16 09:30:56.536425] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:32.292 [2024-10-16 09:30:56.536453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.292 [2024-10-16 09:30:56.536467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.292 [2024-10-16 09:30:56.536479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.292 [2024-10-16 09:30:56.536486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.292 [2024-10-16 09:30:56.536495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.292 [2024-10-16 09:30:56.536502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.292 [2024-10-16 09:30:56.536510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.292 [2024-10-16 09:30:56.536534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.292 [2024-10-16 09:30:56.536543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.292 [2024-10-16 09:30:56.536550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.292 [2024-10-16 09:30:56.536558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:32.292 [2024-10-16 09:30:56.536620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8efd70 (9): Bad file descriptor 00:16:32.292 [2024-10-16 09:30:56.537608] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:32.292 [2024-10-16 09:30:56.537629] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:32.292 09:30:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.668 09:30:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:33.668 09:30:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.234 [2024-10-16 09:30:58.548597] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:34.234 [2024-10-16 09:30:58.548765] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:34.234 [2024-10-16 09:30:58.548795] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:34.234 [2024-10-16 09:30:58.554629] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:34.234 [2024-10-16 09:30:58.611741] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:34.234 [2024-10-16 09:30:58.611949] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:34.234 [2024-10-16 09:30:58.611984] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:34.234 [2024-10-16 09:30:58.612001] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:34.234 [2024-10-16 09:30:58.612009] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:34.234 [2024-10-16 09:30:58.617638] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x994670 was disconnected and freed. delete nvme_qpair. 
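The repeated rpc_cmd/jq/sort/xargs lines above come from the test's get_bdev_list helper, and the "sleep 1" iterations are wait_for_bdev polling until the bdev list matches the expected value (empty after the target interface is downed at @75/@76, nvme1n1 again once the address is re-added and discovery re-attaches). The following is a minimal sketch of that polling pattern reconstructed from the xtrace output, not the verbatim host/discovery_remove_ifc.sh; rpc_cmd is assumed to be the wrapper provided by the SPDK test framework, and any retry limits or timeouts in the real helper are omitted here.

  # Reconstructed sketch of the helpers traced above (assumptions noted in the lead-in).
  get_bdev_list() {
      # List bdev names over the host RPC socket, normalized to one sorted line.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll once per second until the bdev list equals the expected value,
      # e.g. wait_for_bdev '' after the target interface goes down,
      # and wait_for_bdev nvme1n1 after it is brought back up.
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }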
00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76764 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76764 ']' 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76764 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76764 00:16:34.493 killing process with pid 76764 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76764' 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76764 00:16:34.493 09:30:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76764 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.752 rmmod nvme_tcp 00:16:34.752 rmmod nvme_fabrics 00:16:34.752 rmmod nvme_keyring 00:16:34.752 09:30:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 76735 ']' 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 76735 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76735 ']' 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76735 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.752 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76735 00:16:35.011 killing process with pid 76735 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76735' 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76735 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76735 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:35.011 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:35.270 00:16:35.270 real 0m13.144s 00:16:35.270 user 0m22.288s 00:16:35.270 sys 0m2.488s 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.270 09:30:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:35.270 ************************************ 00:16:35.270 END TEST nvmf_discovery_remove_ifc 00:16:35.270 ************************************ 00:16:35.271 09:30:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:35.271 09:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.271 09:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.271 09:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.271 ************************************ 00:16:35.271 START TEST nvmf_identify_kernel_target 00:16:35.271 ************************************ 00:16:35.271 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:35.530 * Looking for test storage... 
00:16:35.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.530 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.531 --rc genhtml_branch_coverage=1 00:16:35.531 --rc genhtml_function_coverage=1 00:16:35.531 --rc genhtml_legend=1 00:16:35.531 --rc geninfo_all_blocks=1 00:16:35.531 --rc geninfo_unexecuted_blocks=1 00:16:35.531 00:16:35.531 ' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.531 --rc genhtml_branch_coverage=1 00:16:35.531 --rc genhtml_function_coverage=1 00:16:35.531 --rc genhtml_legend=1 00:16:35.531 --rc geninfo_all_blocks=1 00:16:35.531 --rc geninfo_unexecuted_blocks=1 00:16:35.531 00:16:35.531 ' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.531 --rc genhtml_branch_coverage=1 00:16:35.531 --rc genhtml_function_coverage=1 00:16:35.531 --rc genhtml_legend=1 00:16:35.531 --rc geninfo_all_blocks=1 00:16:35.531 --rc geninfo_unexecuted_blocks=1 00:16:35.531 00:16:35.531 ' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:35.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.531 --rc genhtml_branch_coverage=1 00:16:35.531 --rc genhtml_function_coverage=1 00:16:35.531 --rc genhtml_legend=1 00:16:35.531 --rc geninfo_all_blocks=1 00:16:35.531 --rc geninfo_unexecuted_blocks=1 00:16:35.531 00:16:35.531 ' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
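The cmp_versions / lt calls traced above gate the coverage flags on the installed lcov version: here 1.15 is less than 2, so the legacy "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options are selected. Below is a simplified stand-in for that dotted-version comparison, assuming plain numeric components; it is not the exact helper from scripts/common.sh, which also splits on '-' and ':' and supports the other comparison operators.

  # Hedged sketch: succeed when dotted version $1 is strictly less than $2.
  lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
  }

Usage matching the trace: "lt 1.15 2" succeeds, so the branch/function coverage options are kept in LCOV_OPTS.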
00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:35.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:35.531 09:30:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:35.531 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.532 09:30:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:35.532 Cannot find device "nvmf_init_br" 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:35.532 Cannot find device "nvmf_init_br2" 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:35.532 Cannot find device "nvmf_tgt_br" 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.532 Cannot find device "nvmf_tgt_br2" 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:35.532 Cannot find device "nvmf_init_br" 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:35.532 Cannot find device "nvmf_init_br2" 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:35.532 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:35.790 Cannot find device "nvmf_tgt_br" 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:35.791 Cannot find device "nvmf_tgt_br2" 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:35.791 Cannot find device "nvmf_br" 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:35.791 Cannot find device "nvmf_init_if" 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:35.791 Cannot find device "nvmf_init_if2" 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.791 09:30:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:35.791 09:30:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:35.791 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:35.791 09:31:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:36.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:16:36.064 00:16:36.064 --- 10.0.0.3 ping statistics --- 00:16:36.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.064 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:36.064 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:36.064 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:36.065 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:16:36.065 00:16:36.065 --- 10.0.0.4 ping statistics --- 00:16:36.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.065 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:36.065 00:16:36.065 --- 10.0.0.1 ping statistics --- 00:16:36.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.065 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:36.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:36.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:16:36.065 00:16:36.065 --- 10.0.0.2 ping statistics --- 00:16:36.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.065 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # return 0 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:36.065 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:36.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.334 Waiting for block devices as requested 00:16:36.334 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:36.593 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:36.593 No valid GPT data, bailing 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:36.593 09:31:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:36.593 09:31:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:36.852 No valid GPT data, bailing 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:36.852 No valid GPT data, bailing 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:36.852 No valid GPT data, bailing 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:36.852 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -a 10.0.0.1 -t tcp -s 4420 00:16:37.111 00:16:37.111 Discovery Log Number of Records 2, Generation counter 2 00:16:37.111 =====Discovery Log Entry 0====== 00:16:37.111 trtype: tcp 00:16:37.111 adrfam: ipv4 00:16:37.111 subtype: current discovery subsystem 00:16:37.111 treq: not specified, sq flow control disable supported 00:16:37.111 portid: 1 00:16:37.111 trsvcid: 4420 00:16:37.111 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:37.111 traddr: 10.0.0.1 00:16:37.111 eflags: none 00:16:37.111 sectype: none 00:16:37.111 =====Discovery Log Entry 1====== 00:16:37.111 trtype: tcp 00:16:37.111 adrfam: ipv4 00:16:37.111 subtype: nvme subsystem 00:16:37.111 treq: not 
specified, sq flow control disable supported 00:16:37.111 portid: 1 00:16:37.111 trsvcid: 4420 00:16:37.111 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:37.111 traddr: 10.0.0.1 00:16:37.111 eflags: none 00:16:37.111 sectype: none 00:16:37.111 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:37.111 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:37.111 ===================================================== 00:16:37.111 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:37.111 ===================================================== 00:16:37.111 Controller Capabilities/Features 00:16:37.111 ================================ 00:16:37.111 Vendor ID: 0000 00:16:37.111 Subsystem Vendor ID: 0000 00:16:37.111 Serial Number: 09ed77339c2918c633b5 00:16:37.111 Model Number: Linux 00:16:37.111 Firmware Version: 6.8.9-20 00:16:37.111 Recommended Arb Burst: 0 00:16:37.111 IEEE OUI Identifier: 00 00 00 00:16:37.111 Multi-path I/O 00:16:37.111 May have multiple subsystem ports: No 00:16:37.111 May have multiple controllers: No 00:16:37.111 Associated with SR-IOV VF: No 00:16:37.111 Max Data Transfer Size: Unlimited 00:16:37.111 Max Number of Namespaces: 0 00:16:37.111 Max Number of I/O Queues: 1024 00:16:37.111 NVMe Specification Version (VS): 1.3 00:16:37.111 NVMe Specification Version (Identify): 1.3 00:16:37.111 Maximum Queue Entries: 1024 00:16:37.111 Contiguous Queues Required: No 00:16:37.111 Arbitration Mechanisms Supported 00:16:37.111 Weighted Round Robin: Not Supported 00:16:37.111 Vendor Specific: Not Supported 00:16:37.111 Reset Timeout: 7500 ms 00:16:37.111 Doorbell Stride: 4 bytes 00:16:37.111 NVM Subsystem Reset: Not Supported 00:16:37.111 Command Sets Supported 00:16:37.111 NVM Command Set: Supported 00:16:37.111 Boot Partition: Not Supported 00:16:37.111 Memory Page Size Minimum: 4096 bytes 00:16:37.111 Memory Page Size Maximum: 4096 bytes 00:16:37.111 Persistent Memory Region: Not Supported 00:16:37.111 Optional Asynchronous Events Supported 00:16:37.111 Namespace Attribute Notices: Not Supported 00:16:37.111 Firmware Activation Notices: Not Supported 00:16:37.111 ANA Change Notices: Not Supported 00:16:37.111 PLE Aggregate Log Change Notices: Not Supported 00:16:37.112 LBA Status Info Alert Notices: Not Supported 00:16:37.112 EGE Aggregate Log Change Notices: Not Supported 00:16:37.112 Normal NVM Subsystem Shutdown event: Not Supported 00:16:37.112 Zone Descriptor Change Notices: Not Supported 00:16:37.112 Discovery Log Change Notices: Supported 00:16:37.112 Controller Attributes 00:16:37.112 128-bit Host Identifier: Not Supported 00:16:37.112 Non-Operational Permissive Mode: Not Supported 00:16:37.112 NVM Sets: Not Supported 00:16:37.112 Read Recovery Levels: Not Supported 00:16:37.112 Endurance Groups: Not Supported 00:16:37.112 Predictable Latency Mode: Not Supported 00:16:37.112 Traffic Based Keep ALive: Not Supported 00:16:37.112 Namespace Granularity: Not Supported 00:16:37.112 SQ Associations: Not Supported 00:16:37.112 UUID List: Not Supported 00:16:37.112 Multi-Domain Subsystem: Not Supported 00:16:37.112 Fixed Capacity Management: Not Supported 00:16:37.112 Variable Capacity Management: Not Supported 00:16:37.112 Delete Endurance Group: Not Supported 00:16:37.112 Delete NVM Set: Not Supported 00:16:37.112 Extended LBA Formats Supported: Not Supported 00:16:37.112 Flexible Data 
Placement Supported: Not Supported 00:16:37.112 00:16:37.112 Controller Memory Buffer Support 00:16:37.112 ================================ 00:16:37.112 Supported: No 00:16:37.112 00:16:37.112 Persistent Memory Region Support 00:16:37.112 ================================ 00:16:37.112 Supported: No 00:16:37.112 00:16:37.112 Admin Command Set Attributes 00:16:37.112 ============================ 00:16:37.112 Security Send/Receive: Not Supported 00:16:37.112 Format NVM: Not Supported 00:16:37.112 Firmware Activate/Download: Not Supported 00:16:37.112 Namespace Management: Not Supported 00:16:37.112 Device Self-Test: Not Supported 00:16:37.112 Directives: Not Supported 00:16:37.112 NVMe-MI: Not Supported 00:16:37.112 Virtualization Management: Not Supported 00:16:37.112 Doorbell Buffer Config: Not Supported 00:16:37.112 Get LBA Status Capability: Not Supported 00:16:37.112 Command & Feature Lockdown Capability: Not Supported 00:16:37.112 Abort Command Limit: 1 00:16:37.112 Async Event Request Limit: 1 00:16:37.112 Number of Firmware Slots: N/A 00:16:37.112 Firmware Slot 1 Read-Only: N/A 00:16:37.112 Firmware Activation Without Reset: N/A 00:16:37.112 Multiple Update Detection Support: N/A 00:16:37.112 Firmware Update Granularity: No Information Provided 00:16:37.112 Per-Namespace SMART Log: No 00:16:37.112 Asymmetric Namespace Access Log Page: Not Supported 00:16:37.112 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:37.112 Command Effects Log Page: Not Supported 00:16:37.112 Get Log Page Extended Data: Supported 00:16:37.112 Telemetry Log Pages: Not Supported 00:16:37.112 Persistent Event Log Pages: Not Supported 00:16:37.112 Supported Log Pages Log Page: May Support 00:16:37.112 Commands Supported & Effects Log Page: Not Supported 00:16:37.112 Feature Identifiers & Effects Log Page:May Support 00:16:37.112 NVMe-MI Commands & Effects Log Page: May Support 00:16:37.112 Data Area 4 for Telemetry Log: Not Supported 00:16:37.112 Error Log Page Entries Supported: 1 00:16:37.112 Keep Alive: Not Supported 00:16:37.112 00:16:37.112 NVM Command Set Attributes 00:16:37.112 ========================== 00:16:37.112 Submission Queue Entry Size 00:16:37.112 Max: 1 00:16:37.112 Min: 1 00:16:37.112 Completion Queue Entry Size 00:16:37.112 Max: 1 00:16:37.112 Min: 1 00:16:37.112 Number of Namespaces: 0 00:16:37.112 Compare Command: Not Supported 00:16:37.112 Write Uncorrectable Command: Not Supported 00:16:37.112 Dataset Management Command: Not Supported 00:16:37.112 Write Zeroes Command: Not Supported 00:16:37.112 Set Features Save Field: Not Supported 00:16:37.112 Reservations: Not Supported 00:16:37.112 Timestamp: Not Supported 00:16:37.112 Copy: Not Supported 00:16:37.112 Volatile Write Cache: Not Present 00:16:37.112 Atomic Write Unit (Normal): 1 00:16:37.112 Atomic Write Unit (PFail): 1 00:16:37.112 Atomic Compare & Write Unit: 1 00:16:37.112 Fused Compare & Write: Not Supported 00:16:37.112 Scatter-Gather List 00:16:37.112 SGL Command Set: Supported 00:16:37.112 SGL Keyed: Not Supported 00:16:37.112 SGL Bit Bucket Descriptor: Not Supported 00:16:37.112 SGL Metadata Pointer: Not Supported 00:16:37.112 Oversized SGL: Not Supported 00:16:37.112 SGL Metadata Address: Not Supported 00:16:37.112 SGL Offset: Supported 00:16:37.112 Transport SGL Data Block: Not Supported 00:16:37.112 Replay Protected Memory Block: Not Supported 00:16:37.112 00:16:37.112 Firmware Slot Information 00:16:37.112 ========================= 00:16:37.112 Active slot: 0 00:16:37.112 00:16:37.112 00:16:37.112 Error Log 
00:16:37.112 ========= 00:16:37.112 00:16:37.112 Active Namespaces 00:16:37.112 ================= 00:16:37.112 Discovery Log Page 00:16:37.112 ================== 00:16:37.112 Generation Counter: 2 00:16:37.112 Number of Records: 2 00:16:37.112 Record Format: 0 00:16:37.112 00:16:37.112 Discovery Log Entry 0 00:16:37.112 ---------------------- 00:16:37.112 Transport Type: 3 (TCP) 00:16:37.112 Address Family: 1 (IPv4) 00:16:37.112 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:37.112 Entry Flags: 00:16:37.112 Duplicate Returned Information: 0 00:16:37.112 Explicit Persistent Connection Support for Discovery: 0 00:16:37.112 Transport Requirements: 00:16:37.112 Secure Channel: Not Specified 00:16:37.112 Port ID: 1 (0x0001) 00:16:37.112 Controller ID: 65535 (0xffff) 00:16:37.112 Admin Max SQ Size: 32 00:16:37.112 Transport Service Identifier: 4420 00:16:37.112 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:37.112 Transport Address: 10.0.0.1 00:16:37.112 Discovery Log Entry 1 00:16:37.112 ---------------------- 00:16:37.112 Transport Type: 3 (TCP) 00:16:37.112 Address Family: 1 (IPv4) 00:16:37.112 Subsystem Type: 2 (NVM Subsystem) 00:16:37.112 Entry Flags: 00:16:37.112 Duplicate Returned Information: 0 00:16:37.112 Explicit Persistent Connection Support for Discovery: 0 00:16:37.112 Transport Requirements: 00:16:37.112 Secure Channel: Not Specified 00:16:37.112 Port ID: 1 (0x0001) 00:16:37.112 Controller ID: 65535 (0xffff) 00:16:37.112 Admin Max SQ Size: 32 00:16:37.112 Transport Service Identifier: 4420 00:16:37.112 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:37.112 Transport Address: 10.0.0.1 00:16:37.112 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:37.372 get_feature(0x01) failed 00:16:37.372 get_feature(0x02) failed 00:16:37.372 get_feature(0x04) failed 00:16:37.372 ===================================================== 00:16:37.372 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:37.372 ===================================================== 00:16:37.372 Controller Capabilities/Features 00:16:37.372 ================================ 00:16:37.372 Vendor ID: 0000 00:16:37.372 Subsystem Vendor ID: 0000 00:16:37.372 Serial Number: d1bb96dc4e0efd90f222 00:16:37.372 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:37.372 Firmware Version: 6.8.9-20 00:16:37.372 Recommended Arb Burst: 6 00:16:37.372 IEEE OUI Identifier: 00 00 00 00:16:37.372 Multi-path I/O 00:16:37.372 May have multiple subsystem ports: Yes 00:16:37.372 May have multiple controllers: Yes 00:16:37.372 Associated with SR-IOV VF: No 00:16:37.372 Max Data Transfer Size: Unlimited 00:16:37.372 Max Number of Namespaces: 1024 00:16:37.372 Max Number of I/O Queues: 128 00:16:37.372 NVMe Specification Version (VS): 1.3 00:16:37.372 NVMe Specification Version (Identify): 1.3 00:16:37.372 Maximum Queue Entries: 1024 00:16:37.372 Contiguous Queues Required: No 00:16:37.372 Arbitration Mechanisms Supported 00:16:37.372 Weighted Round Robin: Not Supported 00:16:37.372 Vendor Specific: Not Supported 00:16:37.372 Reset Timeout: 7500 ms 00:16:37.372 Doorbell Stride: 4 bytes 00:16:37.372 NVM Subsystem Reset: Not Supported 00:16:37.372 Command Sets Supported 00:16:37.372 NVM Command Set: Supported 00:16:37.372 Boot Partition: Not Supported 00:16:37.372 Memory 
Page Size Minimum: 4096 bytes 00:16:37.372 Memory Page Size Maximum: 4096 bytes 00:16:37.372 Persistent Memory Region: Not Supported 00:16:37.372 Optional Asynchronous Events Supported 00:16:37.372 Namespace Attribute Notices: Supported 00:16:37.372 Firmware Activation Notices: Not Supported 00:16:37.372 ANA Change Notices: Supported 00:16:37.372 PLE Aggregate Log Change Notices: Not Supported 00:16:37.372 LBA Status Info Alert Notices: Not Supported 00:16:37.372 EGE Aggregate Log Change Notices: Not Supported 00:16:37.372 Normal NVM Subsystem Shutdown event: Not Supported 00:16:37.372 Zone Descriptor Change Notices: Not Supported 00:16:37.372 Discovery Log Change Notices: Not Supported 00:16:37.372 Controller Attributes 00:16:37.372 128-bit Host Identifier: Supported 00:16:37.372 Non-Operational Permissive Mode: Not Supported 00:16:37.372 NVM Sets: Not Supported 00:16:37.372 Read Recovery Levels: Not Supported 00:16:37.372 Endurance Groups: Not Supported 00:16:37.372 Predictable Latency Mode: Not Supported 00:16:37.372 Traffic Based Keep ALive: Supported 00:16:37.372 Namespace Granularity: Not Supported 00:16:37.372 SQ Associations: Not Supported 00:16:37.372 UUID List: Not Supported 00:16:37.372 Multi-Domain Subsystem: Not Supported 00:16:37.372 Fixed Capacity Management: Not Supported 00:16:37.372 Variable Capacity Management: Not Supported 00:16:37.372 Delete Endurance Group: Not Supported 00:16:37.372 Delete NVM Set: Not Supported 00:16:37.372 Extended LBA Formats Supported: Not Supported 00:16:37.372 Flexible Data Placement Supported: Not Supported 00:16:37.372 00:16:37.372 Controller Memory Buffer Support 00:16:37.372 ================================ 00:16:37.372 Supported: No 00:16:37.372 00:16:37.372 Persistent Memory Region Support 00:16:37.372 ================================ 00:16:37.372 Supported: No 00:16:37.372 00:16:37.372 Admin Command Set Attributes 00:16:37.372 ============================ 00:16:37.372 Security Send/Receive: Not Supported 00:16:37.372 Format NVM: Not Supported 00:16:37.372 Firmware Activate/Download: Not Supported 00:16:37.372 Namespace Management: Not Supported 00:16:37.372 Device Self-Test: Not Supported 00:16:37.372 Directives: Not Supported 00:16:37.372 NVMe-MI: Not Supported 00:16:37.372 Virtualization Management: Not Supported 00:16:37.372 Doorbell Buffer Config: Not Supported 00:16:37.372 Get LBA Status Capability: Not Supported 00:16:37.372 Command & Feature Lockdown Capability: Not Supported 00:16:37.372 Abort Command Limit: 4 00:16:37.372 Async Event Request Limit: 4 00:16:37.372 Number of Firmware Slots: N/A 00:16:37.372 Firmware Slot 1 Read-Only: N/A 00:16:37.372 Firmware Activation Without Reset: N/A 00:16:37.372 Multiple Update Detection Support: N/A 00:16:37.372 Firmware Update Granularity: No Information Provided 00:16:37.372 Per-Namespace SMART Log: Yes 00:16:37.372 Asymmetric Namespace Access Log Page: Supported 00:16:37.373 ANA Transition Time : 10 sec 00:16:37.373 00:16:37.373 Asymmetric Namespace Access Capabilities 00:16:37.373 ANA Optimized State : Supported 00:16:37.373 ANA Non-Optimized State : Supported 00:16:37.373 ANA Inaccessible State : Supported 00:16:37.373 ANA Persistent Loss State : Supported 00:16:37.373 ANA Change State : Supported 00:16:37.373 ANAGRPID is not changed : No 00:16:37.373 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:37.373 00:16:37.373 ANA Group Identifier Maximum : 128 00:16:37.373 Number of ANA Group Identifiers : 128 00:16:37.373 Max Number of Allowed Namespaces : 1024 00:16:37.373 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:37.373 Command Effects Log Page: Supported 00:16:37.373 Get Log Page Extended Data: Supported 00:16:37.373 Telemetry Log Pages: Not Supported 00:16:37.373 Persistent Event Log Pages: Not Supported 00:16:37.373 Supported Log Pages Log Page: May Support 00:16:37.373 Commands Supported & Effects Log Page: Not Supported 00:16:37.373 Feature Identifiers & Effects Log Page:May Support 00:16:37.373 NVMe-MI Commands & Effects Log Page: May Support 00:16:37.373 Data Area 4 for Telemetry Log: Not Supported 00:16:37.373 Error Log Page Entries Supported: 128 00:16:37.373 Keep Alive: Supported 00:16:37.373 Keep Alive Granularity: 1000 ms 00:16:37.373 00:16:37.373 NVM Command Set Attributes 00:16:37.373 ========================== 00:16:37.373 Submission Queue Entry Size 00:16:37.373 Max: 64 00:16:37.373 Min: 64 00:16:37.373 Completion Queue Entry Size 00:16:37.373 Max: 16 00:16:37.373 Min: 16 00:16:37.373 Number of Namespaces: 1024 00:16:37.373 Compare Command: Not Supported 00:16:37.373 Write Uncorrectable Command: Not Supported 00:16:37.373 Dataset Management Command: Supported 00:16:37.373 Write Zeroes Command: Supported 00:16:37.373 Set Features Save Field: Not Supported 00:16:37.373 Reservations: Not Supported 00:16:37.373 Timestamp: Not Supported 00:16:37.373 Copy: Not Supported 00:16:37.373 Volatile Write Cache: Present 00:16:37.373 Atomic Write Unit (Normal): 1 00:16:37.373 Atomic Write Unit (PFail): 1 00:16:37.373 Atomic Compare & Write Unit: 1 00:16:37.373 Fused Compare & Write: Not Supported 00:16:37.373 Scatter-Gather List 00:16:37.373 SGL Command Set: Supported 00:16:37.373 SGL Keyed: Not Supported 00:16:37.373 SGL Bit Bucket Descriptor: Not Supported 00:16:37.373 SGL Metadata Pointer: Not Supported 00:16:37.373 Oversized SGL: Not Supported 00:16:37.373 SGL Metadata Address: Not Supported 00:16:37.373 SGL Offset: Supported 00:16:37.373 Transport SGL Data Block: Not Supported 00:16:37.373 Replay Protected Memory Block: Not Supported 00:16:37.373 00:16:37.373 Firmware Slot Information 00:16:37.373 ========================= 00:16:37.373 Active slot: 0 00:16:37.373 00:16:37.373 Asymmetric Namespace Access 00:16:37.373 =========================== 00:16:37.373 Change Count : 0 00:16:37.373 Number of ANA Group Descriptors : 1 00:16:37.373 ANA Group Descriptor : 0 00:16:37.373 ANA Group ID : 1 00:16:37.373 Number of NSID Values : 1 00:16:37.373 Change Count : 0 00:16:37.373 ANA State : 1 00:16:37.373 Namespace Identifier : 1 00:16:37.373 00:16:37.373 Commands Supported and Effects 00:16:37.373 ============================== 00:16:37.373 Admin Commands 00:16:37.373 -------------- 00:16:37.373 Get Log Page (02h): Supported 00:16:37.373 Identify (06h): Supported 00:16:37.373 Abort (08h): Supported 00:16:37.373 Set Features (09h): Supported 00:16:37.373 Get Features (0Ah): Supported 00:16:37.373 Asynchronous Event Request (0Ch): Supported 00:16:37.373 Keep Alive (18h): Supported 00:16:37.373 I/O Commands 00:16:37.373 ------------ 00:16:37.373 Flush (00h): Supported 00:16:37.373 Write (01h): Supported LBA-Change 00:16:37.373 Read (02h): Supported 00:16:37.373 Write Zeroes (08h): Supported LBA-Change 00:16:37.373 Dataset Management (09h): Supported 00:16:37.373 00:16:37.373 Error Log 00:16:37.373 ========= 00:16:37.373 Entry: 0 00:16:37.373 Error Count: 0x3 00:16:37.373 Submission Queue Id: 0x0 00:16:37.373 Command Id: 0x5 00:16:37.373 Phase Bit: 0 00:16:37.373 Status Code: 0x2 00:16:37.373 Status Code Type: 0x0 00:16:37.373 Do Not Retry: 1 00:16:37.373 Error 
Location: 0x28 00:16:37.373 LBA: 0x0 00:16:37.373 Namespace: 0x0 00:16:37.373 Vendor Log Page: 0x0 00:16:37.373 ----------- 00:16:37.373 Entry: 1 00:16:37.373 Error Count: 0x2 00:16:37.373 Submission Queue Id: 0x0 00:16:37.373 Command Id: 0x5 00:16:37.373 Phase Bit: 0 00:16:37.373 Status Code: 0x2 00:16:37.373 Status Code Type: 0x0 00:16:37.373 Do Not Retry: 1 00:16:37.373 Error Location: 0x28 00:16:37.373 LBA: 0x0 00:16:37.373 Namespace: 0x0 00:16:37.373 Vendor Log Page: 0x0 00:16:37.373 ----------- 00:16:37.373 Entry: 2 00:16:37.373 Error Count: 0x1 00:16:37.373 Submission Queue Id: 0x0 00:16:37.373 Command Id: 0x4 00:16:37.373 Phase Bit: 0 00:16:37.373 Status Code: 0x2 00:16:37.373 Status Code Type: 0x0 00:16:37.373 Do Not Retry: 1 00:16:37.373 Error Location: 0x28 00:16:37.373 LBA: 0x0 00:16:37.373 Namespace: 0x0 00:16:37.373 Vendor Log Page: 0x0 00:16:37.373 00:16:37.373 Number of Queues 00:16:37.373 ================ 00:16:37.373 Number of I/O Submission Queues: 128 00:16:37.373 Number of I/O Completion Queues: 128 00:16:37.373 00:16:37.373 ZNS Specific Controller Data 00:16:37.373 ============================ 00:16:37.373 Zone Append Size Limit: 0 00:16:37.373 00:16:37.373 00:16:37.373 Active Namespaces 00:16:37.373 ================= 00:16:37.373 get_feature(0x05) failed 00:16:37.373 Namespace ID:1 00:16:37.373 Command Set Identifier: NVM (00h) 00:16:37.373 Deallocate: Supported 00:16:37.373 Deallocated/Unwritten Error: Not Supported 00:16:37.373 Deallocated Read Value: Unknown 00:16:37.373 Deallocate in Write Zeroes: Not Supported 00:16:37.373 Deallocated Guard Field: 0xFFFF 00:16:37.373 Flush: Supported 00:16:37.373 Reservation: Not Supported 00:16:37.373 Namespace Sharing Capabilities: Multiple Controllers 00:16:37.373 Size (in LBAs): 1310720 (5GiB) 00:16:37.373 Capacity (in LBAs): 1310720 (5GiB) 00:16:37.373 Utilization (in LBAs): 1310720 (5GiB) 00:16:37.373 UUID: 1c5c2f21-3aa3-42c6-a8a1-173c78f00f38 00:16:37.373 Thin Provisioning: Not Supported 00:16:37.373 Per-NS Atomic Units: Yes 00:16:37.373 Atomic Boundary Size (Normal): 0 00:16:37.373 Atomic Boundary Size (PFail): 0 00:16:37.373 Atomic Boundary Offset: 0 00:16:37.373 NGUID/EUI64 Never Reused: No 00:16:37.373 ANA group ID: 1 00:16:37.373 Namespace Write Protected: No 00:16:37.373 Number of LBA Formats: 1 00:16:37.373 Current LBA Format: LBA Format #00 00:16:37.373 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:37.373 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.373 rmmod nvme_tcp 00:16:37.373 rmmod nvme_fabrics 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:37.373 09:31:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:37.373 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:37.632 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:37.633 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:37.633 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:37.633 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:16:37.633 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:16:37.633 09:31:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:38.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:38.569 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.569 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.569 ************************************ 00:16:38.569 END TEST nvmf_identify_kernel_target 00:16:38.569 ************************************ 00:16:38.569 00:16:38.569 real 0m3.185s 00:16:38.569 user 0m1.135s 00:16:38.569 sys 0m1.422s 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.569 ************************************ 00:16:38.569 START TEST nvmf_auth_host 00:16:38.569 ************************************ 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:38.569 * Looking for test storage... 
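What configure_kernel_target and clean_kernel_target did above reduces to a short nvmet configfs sequence. The sketch below is reconstructed from the trace rather than copied from nvmf/common.sh: the xtrace only shows the echoed values, so the attribute file names are the standard kernel nvmet configfs ones, and /dev/nvme1n1 and 10.0.0.1:4420 are simply the values this run happened to use.

    #!/usr/bin/env bash
    # Illustrative reconstruction of the traced configfs steps, not the harness code.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    # The trace loads only nvmet; nvmet-tcp is added here for completeness.
    modprobe nvmet nvmet-tcp

    # Subsystem, one namespace backed by an unused block device, and one TCP port.
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"

    # Exposing the subsystem on the port is just a symlink; after this,
    # `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns the two log entries seen above.
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # Teardown runs in reverse: unlink the port->subsystem reference first,
    # then remove namespace, port and subsystem, then unload the modules.
    # echo 0 > "$subsys/namespaces/1/enable"
    # rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    # rmdir  "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    # modprobe -r nvmet_tcp nvmet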
00:16:38.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:16:38.569 09:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.829 --rc genhtml_branch_coverage=1 00:16:38.829 --rc genhtml_function_coverage=1 00:16:38.829 --rc genhtml_legend=1 00:16:38.829 --rc geninfo_all_blocks=1 00:16:38.829 --rc geninfo_unexecuted_blocks=1 00:16:38.829 00:16:38.829 ' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.829 --rc genhtml_branch_coverage=1 00:16:38.829 --rc genhtml_function_coverage=1 00:16:38.829 --rc genhtml_legend=1 00:16:38.829 --rc geninfo_all_blocks=1 00:16:38.829 --rc geninfo_unexecuted_blocks=1 00:16:38.829 00:16:38.829 ' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.829 --rc genhtml_branch_coverage=1 00:16:38.829 --rc genhtml_function_coverage=1 00:16:38.829 --rc genhtml_legend=1 00:16:38.829 --rc geninfo_all_blocks=1 00:16:38.829 --rc geninfo_unexecuted_blocks=1 00:16:38.829 00:16:38.829 ' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:38.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.829 --rc genhtml_branch_coverage=1 00:16:38.829 --rc genhtml_function_coverage=1 00:16:38.829 --rc genhtml_legend=1 00:16:38.829 --rc geninfo_all_blocks=1 00:16:38.829 --rc geninfo_unexecuted_blocks=1 00:16:38.829 00:16:38.829 ' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.829 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:38.829 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:38.830 Cannot find device "nvmf_init_br" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:38.830 Cannot find device "nvmf_init_br2" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:38.830 Cannot find device "nvmf_tgt_br" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.830 Cannot find device "nvmf_tgt_br2" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:38.830 Cannot find device "nvmf_init_br" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:38.830 Cannot find device "nvmf_init_br2" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:38.830 Cannot find device "nvmf_tgt_br" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:38.830 Cannot find device "nvmf_tgt_br2" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:38.830 Cannot find device "nvmf_br" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:38.830 Cannot find device "nvmf_init_if" 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:38.830 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:39.089 Cannot find device "nvmf_init_if2" 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.089 09:31:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:39.089 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:39.090 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
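Before any target can listen, nvmf_veth_init has to build the virtual test network itself: two initiator-side interfaces on the host, two target-side interfaces inside the nvmf_tgt_ns_spdk namespace, and a bridge tying the peer ends together. A compressed sketch using the same names and addresses as the trace (illustrative; the real function also copes with pre-existing devices, which is what the "Cannot find device" cleanup messages above are about). The iptables rules and the four pings that follow then verify 10.0.0.0/24 connectivity across the bridge.

    # Illustrative reconstruction of nvmf_veth_init from the trace above.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: *_if is the usable end, *_br is the end that joins the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses on the host, target addresses inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the bridge-side ends to nvmf_br.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done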
00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:39.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:16:39.349 00:16:39.349 --- 10.0.0.3 ping statistics --- 00:16:39.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.349 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:39.349 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:39.349 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:16:39.349 00:16:39.349 --- 10.0.0.4 ping statistics --- 00:16:39.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.349 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:16:39.349 00:16:39.349 --- 10.0.0.1 ping statistics --- 00:16:39.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.349 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:39.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:39.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:39.349 00:16:39.349 --- 10.0.0.2 ping statistics --- 00:16:39.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.349 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # return 0 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=77747 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 77747 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77747 ']' 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
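With the links up, the script opens TCP port 4420 on the initiator-side interfaces (the ipts helper simply adds an SPDK_NVMF-tagged iptables rule), verifies connectivity in both directions, and nvmfappstart launches nvmf_tgt inside the namespace with nvme_auth tracing before waiting on the RPC socket. A condensed sketch of those steps, assuming root; the wait loop at the end is a simplified stand-in for the test's waitforlisten helper:

# Allow NVMe/TCP traffic on port 4420, mirroring the ipts rules in the log.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Quick connectivity check across the bridge, both directions.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# Start the SPDK app inside the namespace with nvme_auth tracing, then wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified stand-in for waitforlisten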
00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.349 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.608 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.608 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:39.608 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:39.608 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:39.608 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.608 09:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6480220cfa8afdf3c618c30f4100416d 00:16:39.608 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.DZe 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6480220cfa8afdf3c618c30f4100416d 0 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6480220cfa8afdf3c618c30f4100416d 0 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6480220cfa8afdf3c618c30f4100416d 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.DZe 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.DZe 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DZe 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:39.868 09:31:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=edb76ed6f3789205993298fd6eb71d36fabfb0f51e19b3775a23a086933434ea 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.ZhU 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key edb76ed6f3789205993298fd6eb71d36fabfb0f51e19b3775a23a086933434ea 3 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 edb76ed6f3789205993298fd6eb71d36fabfb0f51e19b3775a23a086933434ea 3 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=edb76ed6f3789205993298fd6eb71d36fabfb0f51e19b3775a23a086933434ea 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.ZhU 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.ZhU 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ZhU 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4baac3cee438567cdce00d167ca95eb6937b10c18a6d2bf5 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Hx2 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4baac3cee438567cdce00d167ca95eb6937b10c18a6d2bf5 0 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4baac3cee438567cdce00d167ca95eb6937b10c18a6d2bf5 0 
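gen_dhchap_key, traced above, draws a random hex secret with xxd and then pipes it through an inline python snippet (whose body is not visible in the trace) to produce the DHHC-1:<digest>:<base64>: strings that appear later. Judging from the emitted keys, the encoding appears to be base64 over the ASCII secret plus a trailing 4-byte CRC32, with the digest id taken from the null=0 / sha256=1 / sha384=2 / sha512=3 map. The following is an illustrative approximation only, not the nvmf/common.sh implementation; the little-endian CRC and the python3 invocation are assumptions:

# Generate one "null 32" key, roughly as gen_dhchap_key/format_key appear to do.
digest=0; len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)            # 32 hex characters of secret material
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")        # ASCII secret + CRC32 (assumed layout)
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
EOF
chmod 0600 "$file"                                        # keys must not be world-readable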
00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4baac3cee438567cdce00d167ca95eb6937b10c18a6d2bf5 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Hx2 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Hx2 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Hx2 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=39e2ee5ad74c3012171e0f252011e24a3be37427b92470c4 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.HYr 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 39e2ee5ad74c3012171e0f252011e24a3be37427b92470c4 2 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 39e2ee5ad74c3012171e0f252011e24a3be37427b92470c4 2 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=39e2ee5ad74c3012171e0f252011e24a3be37427b92470c4 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.HYr 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.HYr 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.HYr 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:39.868 09:31:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bd7bddbc2a64ecb521b8eb022c8201e6 00:16:39.868 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.1mv 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bd7bddbc2a64ecb521b8eb022c8201e6 1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bd7bddbc2a64ecb521b8eb022c8201e6 1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bd7bddbc2a64ecb521b8eb022c8201e6 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.1mv 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.1mv 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1mv 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=9bd8cfea6346ddd17be28b71200f7f4d 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Hrn 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 9bd8cfea6346ddd17be28b71200f7f4d 1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 9bd8cfea6346ddd17be28b71200f7f4d 1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=9bd8cfea6346ddd17be28b71200f7f4d 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Hrn 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Hrn 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Hrn 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e4d7c27b45b9a57395462336580394d3a7745d1624fe6655 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.RHy 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e4d7c27b45b9a57395462336580394d3a7745d1624fe6655 2 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e4d7c27b45b9a57395462336580394d3a7745d1624fe6655 2 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e4d7c27b45b9a57395462336580394d3a7745d1624fe6655 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.RHy 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.RHy 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RHy 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:16:40.129 09:31:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6e9bf939fa6a66279b84b916a4884b9e 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Rdr 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6e9bf939fa6a66279b84b916a4884b9e 0 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6e9bf939fa6a66279b84b916a4884b9e 0 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6e9bf939fa6a66279b84b916a4884b9e 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:16:40.129 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Rdr 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Rdr 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Rdr 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6ec97605377623235eb2d7d792ef09514a34f90da1774cde98f850083421e0bd 00:16:40.388 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.lI4 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6ec97605377623235eb2d7d792ef09514a34f90da1774cde98f850083421e0bd 3 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6ec97605377623235eb2d7d792ef09514a34f90da1774cde98f850083421e0bd 3 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6ec97605377623235eb2d7d792ef09514a34f90da1774cde98f850083421e0bd 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.lI4 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.lI4 00:16:40.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lI4 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77747 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77747 ']' 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.389 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DZe 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ZhU ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZhU 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Hx2 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.HYr ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.HYr 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1mv 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Hrn ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Hrn 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RHy 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Rdr ]] 00:16:40.648 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Rdr 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lI4 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:40.649 09:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:40.649 09:31:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:40.649 09:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:41.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:41.216 Waiting for block devices as requested 00:16:41.216 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:41.216 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:41.784 No valid GPT data, bailing 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:41.784 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:16:41.785 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:41.785 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:41.785 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:41.785 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:16:41.785 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:41.785 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:42.043 No valid GPT data, bailing 00:16:42.043 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:42.043 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:42.043 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:16:42.043 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:42.044 No valid GPT data, bailing 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:42.044 No valid GPT data, bailing 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -a 10.0.0.1 -t tcp -s 4420 00:16:42.044 00:16:42.044 Discovery Log Number of Records 2, Generation counter 2 00:16:42.044 =====Discovery Log Entry 0====== 00:16:42.044 trtype: tcp 00:16:42.044 adrfam: ipv4 00:16:42.044 subtype: current discovery subsystem 00:16:42.044 treq: not specified, sq flow control disable supported 00:16:42.044 portid: 1 00:16:42.044 trsvcid: 4420 00:16:42.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:42.044 traddr: 10.0.0.1 00:16:42.044 eflags: none 00:16:42.044 sectype: none 00:16:42.044 =====Discovery Log Entry 1====== 00:16:42.044 trtype: tcp 00:16:42.044 adrfam: ipv4 00:16:42.044 subtype: nvme subsystem 00:16:42.044 treq: not specified, sq flow control disable supported 00:16:42.044 portid: 1 00:16:42.044 trsvcid: 4420 00:16:42.044 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:42.044 traddr: 10.0.0.1 00:16:42.044 eflags: none 00:16:42.044 sectype: none 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:42.044 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
10.0.0.1 ]] 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.303 nvme0n1 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.303 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.565 nvme0n1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.565 
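Each connect_authenticate pass exercised here follows the same host-side RPC pattern: load the host and controller secrets into the keyring, restrict the allowed DH-HMAC-CHAP digests and DH groups, attach to the kernel target with both keys, confirm the controller appears, and detach again. A sketch of one such pass using scripts/rpc.py directly; the explicit rpc.py path stands in for the test's rpc_cmd wrapper and is an assumption, while the key files, NQNs, and address follow the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.DZe           # host secret
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZhU         # controller (bidirectional) secret
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers | jq -r '.[].name'                # expect "nvme0" on success
$rpc bdev_nvme_detach_controller nvme0                           # tear down before the next combination

The detach at the end matters because, as the for-digest / for-dhgroup / for-keyid loops in the trace show, the same controller name is reused for every digest, DH group, and key combination.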
09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:42.565 09:31:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.565 09:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.831 nvme0n1 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:42.831 09:31:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.831 nvme0n1 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.831 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.090 09:31:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:43.090 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.091 nvme0n1 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:43.091 
09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.091 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:16:43.349 nvme0n1 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.349 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.350 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:43.608 09:31:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.608 09:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.608 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.868 nvme0n1 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:43.868 09:31:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.868 09:31:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.868 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.127 nvme0n1 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.127 nvme0n1 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.127 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.386 nvme0n1 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.386 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.387 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.646 nvme0n1 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.646 09:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.646 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.214 09:31:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.214 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.473 nvme0n1 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.473 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.474 09:31:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.474 09:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.733 nvme0n1 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.733 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.992 nvme0n1 00:16:45.992 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.992 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.992 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.992 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.993 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.252 nvme0n1 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.252 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.253 09:31:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.253 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.512 nvme0n1 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.512 09:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.416 nvme0n1 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.416 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.417 09:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.675 nvme0n1 00:16:48.675 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.675 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.675 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.675 09:31:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.675 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.934 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.934 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.934 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.934 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.934 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.934 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.934 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.935 09:31:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.935 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.197 nvme0n1 00:16:49.197 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.197 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.197 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.197 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.197 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.197 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.197 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.198 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.198 
09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.456 nvme0n1 00:16:49.456 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.456 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.456 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.456 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.456 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.714 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.714 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.714 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.714 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.714 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.715 09:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.976 nvme0n1 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.976 09:31:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.976 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 nvme0n1 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.544 09:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.111 nvme0n1 00:16:51.111 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.111 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.111 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.112 
09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.112 09:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.679 nvme0n1 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.679 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.680 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.939 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.506 nvme0n1 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.506 09:31:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:52.506 09:31:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.506 09:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.073 nvme0n1 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.073 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.074 nvme0n1 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.074 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.332 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.333 nvme0n1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:53.333 
09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.333 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 nvme0n1 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.593 
09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 nvme0n1 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.593 09:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 nvme0n1 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.853 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 nvme0n1 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.113 
09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:54.113 09:31:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.113 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.372 nvme0n1 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.372 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:54.373 09:31:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.373 nvme0n1 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.373 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.632 09:31:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.632 nvme0n1 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.632 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.633 09:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:54.633 
09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.633 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
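The trace above walks each DH-HMAC-CHAP key through the sha384 digest with the ffdhe3072 group before switching to ffdhe4096 below. A condensed sketch of the per-key flow, reconstructed only from the rpc_cmd calls visible in this log (the 10.0.0.1/4420 listener, the host/subsystem NQNs, and the key/ckey naming are taken from the trace; the loop body is a simplification, not the verbatim host/auth.sh):

  # one iteration of the digest/dhgroup/keyid sweep, as seen in the trace
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"                 # program the target side with key (and ckey, if set)
  rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"       # restrict the SPDK host to this digest/group pair
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}   # authenticated connect
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]        # verify the controller attached
  rpc_cmd bdev_nvme_detach_controller nvme0                         # tear down before the next keyid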
00:16:54.892 nvme0n1 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:54.892 09:31:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.892 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:54.893 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:54.893 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:54.893 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.893 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.893 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.152 nvme0n1 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.152 09:31:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.152 09:31:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.152 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.411 nvme0n1 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.411 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.670 nvme0n1 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:55.670 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.671 09:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.930 nvme0n1 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.930 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.189 nvme0n1 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.189 09:31:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.189 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 nvme0n1 00:16:56.448 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.448 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.448 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.448 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.448 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.448 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.709 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.710 09:31:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.710 09:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.025 nvme0n1 00:16:57.025 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.026 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 nvme0n1 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.285 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.853 nvme0n1 00:16:57.853 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.853 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.853 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.853 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.853 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.853 09:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.853 09:31:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.853 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.113 nvme0n1 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.113 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.681 nvme0n1 00:16:58.681 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.681 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.681 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.681 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.681 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.681 09:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.681 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.248 nvme0n1 00:16:59.248 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.248 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.248 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.248 09:31:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.248 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:59.249 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.508 09:31:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.508 09:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.078 nvme0n1 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:00.078 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.078 
09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.645 nvme0n1 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.645 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.646 09:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.213 nvme0n1 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:01.213 09:31:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:01.213 09:31:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.213 nvme0n1 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.213 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.472 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.472 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.472 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.472 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.472 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.472 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.472 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:01.473 09:31:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.473 nvme0n1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.473 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.732 nvme0n1 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:01.732 09:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.732 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.732 nvme0n1 00:17:01.733 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.733 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.733 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.733 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.733 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.733 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:01.991 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.992 nvme0n1 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.992 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.251 nvme0n1 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.251 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.510 nvme0n1 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:02.510 
09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.510 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.769 nvme0n1 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.769 
09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.769 09:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.769 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.769 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.770 nvme0n1 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.770 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.028 nvme0n1 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:03.028 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.029 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.286 nvme0n1 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.286 
09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:17:03.286 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:03.287 09:31:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.287 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.545 nvme0n1 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:03.545 09:31:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.545 09:31:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.825 nvme0n1 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.825 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.826 09:31:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.826 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.097 nvme0n1 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:04.097 
09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.097 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
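The trace above repeats the same pattern for every DH group and key ID in host/auth.sh: nvmet_auth_set_key programs one of the DHHC-1 secrets on the target side, bdev_nvme_set_options restricts the host to a single digest/dhgroup, bdev_nvme_attach_controller connects with the matching --dhchap-key (and --dhchap-ctrlr-key when a controller key is defined), and bdev_nvme_get_controllers / bdev_nvme_detach_controller confirm and tear down the authenticated connection. The following is a minimal sketch of that loop reconstructed from the trace, not the verbatim host/auth.sh source; the dhgroups list, the keys/ckeys arrays, the rpc_cmd wrapper and the nvmet_auth_set_key helper are assumed to be defined earlier in the script:

    # Illustrative reconstruction of the per-dhgroup / per-keyid loop seen in this trace.
    # Assumes keys[]/ckeys[] hold the DHHC-1 secrets and rpc_cmd talks to the SPDK target.
    for dhgroup in "${dhgroups[@]}"; do          # e.g. ffdhe3072 ffdhe4096 ffdhe6144
        for keyid in "${!keys[@]}"; do           # key IDs 0..4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            # Limit the initiator to this digest/dhgroup, then connect with keyN/ckeyN.
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
            # The attach only succeeds if DH-HMAC-CHAP completes; verify, then clean up.
            [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

Each successful iteration shows up in the log as the nvme0n1 namespace appearing after the attach and the controller being detached again, which is why the same key material recurs once per DH group.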
00:17:04.356 nvme0n1 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.356 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.357 09:31:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.357 09:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.925 nvme0n1 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.925 09:31:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.925 09:31:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.925 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.185 nvme0n1 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.185 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.444 nvme0n1 00:17:05.444 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.444 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.444 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.444 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.444 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.703 09:31:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.962 nvme0n1 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.962 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.963 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.221 nvme0n1 00:17:06.221 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.221 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.221 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.221 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.221 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ4MDIyMGNmYThhZmRmM2M2MThjMzBmNDEwMDQxNmRBpzNj: 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: ]] 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWRiNzZlZDZmMzc4OTIwNTk5MzI5OGZkNmViNzFkMzZmYWJmYjBmNTFlMTliMzc3NWEyM2EwODY5MzM0MzRlYVGfMl4=: 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.480 09:31:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:06.480 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:06.481 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.481 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.481 09:31:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.048 nvme0n1 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.048 09:31:31 
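[Editor's note] The DHHC-1 strings echoed in the trace appear to follow the NVMe-oF DH-HMAC-CHAP secret representation (as produced by e.g. nvme-cli's gen-dhchap-key): a DHHC-1 version tag, a two-digit field recording the HMAC used when the secret was generated (00 none, 01 SHA-256/32-byte, 02 SHA-384/48-byte, 03 SHA-512/64-byte), and a base64 blob carrying the secret plus a 4-byte CRC-32, terminated by a colon. A quick sanity check of one key copied from the trace, using only coreutils (the expected length is an inference from that format, not something the test itself asserts):

  # "01" key => 32-byte secret + 4-byte CRC-32 => the blob should decode to 36 bytes.
  key='DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI:'
  blob=$(cut -d: -f3 <<< "$key")
  echo -n "$blob" | base64 -d | wc -c   # prints 36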
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.048 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.615 nvme0n1 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.615 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.616 09:31:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.183 nvme0n1 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTRkN2MyN2I0NWI5YTU3Mzk1NDYyMzM2NTgwMzk0ZDNhNzc0NWQxNjI0ZmU2NjU1Z6QlMg==: 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU5YmY5MzlmYTZhNjYyNzliODRiOTE2YTQ4ODRiOWUxM3gX: 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.183 09:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.863 nvme0n1 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVjOTc2MDUzNzc2MjMyMzVlYjJkN2Q3OTJlZjA5NTE0YTM0ZjkwZGExNzc0Y2RlOThmODUwMDgzNDIxZTBiZMFP5uY=: 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.863 09:31:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.863 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 nvme0n1 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.431 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.432 request: 00:17:09.432 { 00:17:09.432 "name": "nvme0", 00:17:09.432 "trtype": "tcp", 00:17:09.432 "traddr": "10.0.0.1", 00:17:09.432 "adrfam": "ipv4", 00:17:09.432 "trsvcid": "4420", 00:17:09.432 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.432 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.432 "prchk_reftag": false, 00:17:09.432 "prchk_guard": false, 00:17:09.432 "hdgst": false, 00:17:09.432 "ddgst": false, 00:17:09.432 "allow_unrecognized_csi": false, 00:17:09.432 "method": "bdev_nvme_attach_controller", 00:17:09.432 "req_id": 1 00:17:09.432 } 00:17:09.432 Got JSON-RPC error response 00:17:09.432 response: 00:17:09.432 { 00:17:09.432 "code": -5, 00:17:09.432 "message": "Input/output error" 00:17:09.432 } 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.432 request: 00:17:09.432 { 00:17:09.432 "name": "nvme0", 00:17:09.432 "trtype": "tcp", 00:17:09.432 "traddr": "10.0.0.1", 00:17:09.432 "adrfam": "ipv4", 00:17:09.432 "trsvcid": "4420", 00:17:09.432 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.432 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.432 "prchk_reftag": false, 00:17:09.432 "prchk_guard": false, 00:17:09.432 "hdgst": false, 00:17:09.432 "ddgst": false, 00:17:09.432 "dhchap_key": "key2", 00:17:09.432 "allow_unrecognized_csi": false, 00:17:09.432 "method": "bdev_nvme_attach_controller", 00:17:09.432 "req_id": 1 00:17:09.432 } 00:17:09.432 Got JSON-RPC error response 00:17:09.432 response: 00:17:09.432 { 00:17:09.432 "code": -5, 00:17:09.432 "message": "Input/output error" 00:17:09.432 } 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.432 09:31:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.432 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 request: 00:17:09.692 { 00:17:09.692 "name": "nvme0", 00:17:09.692 "trtype": "tcp", 00:17:09.692 "traddr": "10.0.0.1", 00:17:09.692 "adrfam": "ipv4", 00:17:09.692 "trsvcid": "4420", 
00:17:09.692 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:09.692 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:09.692 "prchk_reftag": false, 00:17:09.692 "prchk_guard": false, 00:17:09.692 "hdgst": false, 00:17:09.692 "ddgst": false, 00:17:09.692 "dhchap_key": "key1", 00:17:09.692 "dhchap_ctrlr_key": "ckey2", 00:17:09.692 "allow_unrecognized_csi": false, 00:17:09.692 "method": "bdev_nvme_attach_controller", 00:17:09.692 "req_id": 1 00:17:09.692 } 00:17:09.692 Got JSON-RPC error response 00:17:09.692 response: 00:17:09.692 { 00:17:09.692 "code": -5, 00:17:09.692 "message": "Input/output error" 00:17:09.692 } 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.692 09:31:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 nvme0n1 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.951 request: 00:17:09.951 { 00:17:09.951 "name": "nvme0", 00:17:09.951 "dhchap_key": "key1", 00:17:09.951 "dhchap_ctrlr_key": "ckey2", 00:17:09.951 "method": "bdev_nvme_set_keys", 00:17:09.951 "req_id": 1 00:17:09.951 } 00:17:09.951 Got JSON-RPC error response 00:17:09.951 response: 00:17:09.951 
{ 00:17:09.951 "code": -13, 00:17:09.951 "message": "Permission denied" 00:17:09.951 } 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:09.951 09:31:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:10.887 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJhYWMzY2VlNDM4NTY3Y2RjZTAwZDE2N2NhOTVlYjY5MzdiMTBjMThhNmQyYmY1YDYdQg==: 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: ]] 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzllMmVlNWFkNzRjMzAxMjE3MWUwZjI1MjAxMWUyNGEzYmUzNzQyN2I5MjQ3MGM0ZvkiHA==: 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.888 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.146 nvme0n1 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.146 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmQ3YmRkYmMyYTY0ZWNiNTIxYjhlYjAyMmM4MjAxZTYBRvFI: 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: ]] 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWJkOGNmZWE2MzQ2ZGRkMTdiZTI4YjcxMjAwZjdmNGQY+CMz: 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.147 request: 00:17:11.147 { 00:17:11.147 "name": "nvme0", 00:17:11.147 "dhchap_key": "key2", 00:17:11.147 "dhchap_ctrlr_key": "ckey1", 00:17:11.147 "method": "bdev_nvme_set_keys", 00:17:11.147 "req_id": 1 00:17:11.147 } 00:17:11.147 Got JSON-RPC error response 00:17:11.147 response: 00:17:11.147 { 00:17:11.147 "code": -13, 00:17:11.147 "message": "Permission denied" 00:17:11.147 } 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:11.147 09:31:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:12.086 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.086 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:12.086 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.086 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.086 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:12.345 rmmod nvme_tcp 00:17:12.345 rmmod nvme_fabrics 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 77747 ']' 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 77747 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77747 ']' 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77747 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77747 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.345 killing process with pid 77747 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77747' 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77747 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 77747 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:12.345 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:12.604 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:12.604 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:12.604 09:31:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.604 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:12.604 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:17:12.605 09:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:17:12.863 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:13.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:13.431 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:17:13.431 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:13.689 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DZe /tmp/spdk.key-null.Hx2 /tmp/spdk.key-sha256.1mv /tmp/spdk.key-sha384.RHy /tmp/spdk.key-sha512.lI4 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:13.689 09:31:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:13.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:13.948 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:13.948 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:13.948 ************************************ 00:17:13.948 END TEST nvmf_auth_host 00:17:13.948 ************************************ 00:17:13.948 00:17:13.948 real 0m35.360s 00:17:13.948 user 0m32.591s 00:17:13.948 sys 0m3.872s 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.948 ************************************ 00:17:13.948 START TEST nvmf_digest 00:17:13.948 ************************************ 00:17:13.948 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:14.207 * Looking for test storage... 
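Before the digest suite proceeds, the three DH-HMAC-CHAP patterns that the just-finished nvmf_auth_host run exercised are worth isolating. The commands below are lifted from the trace above and use the suite's rpc_cmd wrapper (a thin front end to scripts/rpc.py); key1/key2/ckey1/ckey2 are keyring entries registered earlier in the run and not shown in this excerpt, so treat this as an illustrative sketch rather than a standalone recipe:

    # matching host/controller key pair: the attach succeeds and nvme0n1 appears
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # re-keying the live controller with the pair the kernel target was just switched to: succeeds
    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # mismatched pair: rejected with JSON-RPC code -13 (Permission denied)
    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2

The attach attempts with the wrong key fail during the connect/authentication step with code -5 (Input/output error), while the mismatched bdev_nvme_set_keys calls are refused with Permission denied; in both cases that failure is exactly what the NOT wrapper in the trace asserts.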
00:17:14.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.207 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.208 --rc genhtml_branch_coverage=1 00:17:14.208 --rc genhtml_function_coverage=1 00:17:14.208 --rc genhtml_legend=1 00:17:14.208 --rc geninfo_all_blocks=1 00:17:14.208 --rc geninfo_unexecuted_blocks=1 00:17:14.208 00:17:14.208 ' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.208 --rc genhtml_branch_coverage=1 00:17:14.208 --rc genhtml_function_coverage=1 00:17:14.208 --rc genhtml_legend=1 00:17:14.208 --rc geninfo_all_blocks=1 00:17:14.208 --rc geninfo_unexecuted_blocks=1 00:17:14.208 00:17:14.208 ' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.208 --rc genhtml_branch_coverage=1 00:17:14.208 --rc genhtml_function_coverage=1 00:17:14.208 --rc genhtml_legend=1 00:17:14.208 --rc geninfo_all_blocks=1 00:17:14.208 --rc geninfo_unexecuted_blocks=1 00:17:14.208 00:17:14.208 ' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:14.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.208 --rc genhtml_branch_coverage=1 00:17:14.208 --rc genhtml_function_coverage=1 00:17:14.208 --rc genhtml_legend=1 00:17:14.208 --rc geninfo_all_blocks=1 00:17:14.208 --rc geninfo_unexecuted_blocks=1 00:17:14.208 00:17:14.208 ' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.208 09:31:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:14.208 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:14.208 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:14.209 Cannot find device "nvmf_init_br" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:14.209 Cannot find device "nvmf_init_br2" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:14.209 Cannot find device "nvmf_tgt_br" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:14.209 Cannot find device "nvmf_tgt_br2" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:14.209 Cannot find device "nvmf_init_br" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:14.209 Cannot find device "nvmf_init_br2" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:14.209 Cannot find device "nvmf_tgt_br" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:14.209 Cannot find device "nvmf_tgt_br2" 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:14.209 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:14.467 Cannot find device "nvmf_br" 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:14.467 Cannot find device "nvmf_init_if" 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:14.467 Cannot find device "nvmf_init_if2" 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.467 09:31:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.467 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:14.468 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:14.468 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.468 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:14.468 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:14.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:14.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:14.726 00:17:14.726 --- 10.0.0.3 ping statistics --- 00:17:14.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.726 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:14.726 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:14.726 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:17:14.726 00:17:14.726 --- 10.0.0.4 ping statistics --- 00:17:14.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.726 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:14.726 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:14.727 00:17:14.727 --- 10.0.0.1 ping statistics --- 00:17:14.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.727 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:14.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:14.727 00:17:14.727 --- 10.0.0.2 ping statistics --- 00:17:14.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.727 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # return 0 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:14.727 ************************************ 00:17:14.727 START TEST nvmf_digest_clean 00:17:14.727 ************************************ 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
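The nvmf_veth_init sequence above rebuilds the virtual topology that the auth test tore down a moment earlier: two initiator-side veth interfaces (10.0.0.1, 10.0.0.2) stay in the root namespace, two target-side interfaces (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, all four are joined through the nvmf_br bridge, iptables ACCEPT rules are added for TCP port 4420, and the four pings confirm reachability. A minimal sketch of the same idea, reduced to a single initiator/target pair over a direct veth link instead of the bridge (that reduction is a simplification; interface names, addresses and the iptables rule follow the log):

    # namespace for the target side and one veth pair crossing into it
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addresses on the /24 used throughout this log
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # open the NVMe/TCP port and verify the path before starting the target
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3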
00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=79374 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 79374 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79374 ']' 00:17:14.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.727 09:31:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:14.727 [2024-10-16 09:31:39.014211] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:17:14.727 [2024-10-16 09:31:39.014301] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.986 [2024-10-16 09:31:39.156605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.986 [2024-10-16 09:31:39.207503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.986 [2024-10-16 09:31:39.207575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.986 [2024-10-16 09:31:39.207590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.986 [2024-10-16 09:31:39.207600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.986 [2024-10-16 09:31:39.207609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:14.986 [2024-10-16 09:31:39.208047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.986 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:14.986 [2024-10-16 09:31:39.368488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:15.245 null0 00:17:15.245 [2024-10-16 09:31:39.421393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.245 [2024-10-16 09:31:39.445512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79397 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79397 /var/tmp/bperf.sock 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79397 ']' 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:15.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.245 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:15.245 [2024-10-16 09:31:39.509581] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:17:15.245 [2024-10-16 09:31:39.509858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79397 ] 00:17:15.245 [2024-10-16 09:31:39.650511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.504 [2024-10-16 09:31:39.703879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.504 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.504 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:15.504 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:15.504 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:15.504 09:31:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:15.763 [2024-10-16 09:31:40.099050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:15.763 09:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:15.763 09:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:16.021 nvme0n1 00:17:16.280 09:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:16.280 09:31:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:16.280 Running I/O for 2 seconds... 
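When this 2-second run finishes, digest.sh asks the bperf initiator for accel_get_stats and pipes the result through the jq program traced a little further down, then asserts both that crc32c was executed at least once and that it ran in the expected module ("software" here, since DSA scanning is off). A self-contained illustration of what that filter extracts: the jq program is copied from this log, while the sample JSON is invented purely for the example and is not the real accel_get_stats output:

    echo '{"operations": [{"opcode": "crc32c", "module_name": "software", "executed": 35928}]}' |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # prints: software 35928   (read into "acc_module acc_executed" by digest.sh)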
00:17:18.593 17907.00 IOPS, 69.95 MiB/s [2024-10-16T09:31:42.997Z] 17970.50 IOPS, 70.20 MiB/s 00:17:18.593 Latency(us) 00:17:18.593 [2024-10-16T09:31:42.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.593 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:18.593 nvme0n1 : 2.01 17964.43 70.17 0.00 0.00 7120.16 6613.18 16801.05 00:17:18.593 [2024-10-16T09:31:42.997Z] =================================================================================================================== 00:17:18.593 [2024-10-16T09:31:42.997Z] Total : 17964.43 70.17 0.00 0.00 7120.16 6613.18 16801.05 00:17:18.593 { 00:17:18.593 "results": [ 00:17:18.593 { 00:17:18.593 "job": "nvme0n1", 00:17:18.593 "core_mask": "0x2", 00:17:18.593 "workload": "randread", 00:17:18.593 "status": "finished", 00:17:18.593 "queue_depth": 128, 00:17:18.593 "io_size": 4096, 00:17:18.593 "runtime": 2.007801, 00:17:18.593 "iops": 17964.42974179214, 00:17:18.593 "mibps": 70.17355367887555, 00:17:18.593 "io_failed": 0, 00:17:18.593 "io_timeout": 0, 00:17:18.593 "avg_latency_us": 7120.15598547229, 00:17:18.593 "min_latency_us": 6613.178181818182, 00:17:18.593 "max_latency_us": 16801.04727272727 00:17:18.593 } 00:17:18.593 ], 00:17:18.593 "core_count": 1 00:17:18.593 } 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:18.593 | select(.opcode=="crc32c") 00:17:18.593 | "\(.module_name) \(.executed)"' 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79397 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79397 ']' 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79397 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79397 00:17:18.593 killing process with pid 79397 00:17:18.593 Received shutdown signal, test time was about 2.000000 seconds 00:17:18.593 00:17:18.593 Latency(us) 00:17:18.593 [2024-10-16T09:31:42.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:18.593 [2024-10-16T09:31:42.997Z] =================================================================================================================== 00:17:18.593 [2024-10-16T09:31:42.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79397' 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79397 00:17:18.593 09:31:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79397 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79451 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79451 /var/tmp/bperf.sock 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79451 ']' 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:18.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.852 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.852 [2024-10-16 09:31:43.130599] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:17:18.852 [2024-10-16 09:31:43.130850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79451 ] 00:17:18.852 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:18.852 Zero copy mechanism will not be used. 00:17:19.112 [2024-10-16 09:31:43.259564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.112 [2024-10-16 09:31:43.302588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.112 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.112 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:19.112 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:19.112 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:19.112 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:19.371 [2024-10-16 09:31:43.706052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:19.371 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.371 09:31:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:19.938 nvme0n1 00:17:19.938 09:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:19.938 09:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:19.938 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:19.938 Zero copy mechanism will not be used. 00:17:19.938 Running I/O for 2 seconds... 
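The IOPS and MiB/s columns in these result tables are two views of the same number: throughput is simply IOPS multiplied by the I/O size. For the 4 KiB randread run above, for instance:

    awk 'BEGIN { printf "%.2f MiB/s\n", 17964.43 * 4096 / 1048576 }'    # prints 70.17 MiB/s, matching the table

The 128 KiB runs work the same way, with 131072 in place of 4096.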
00:17:21.817 8736.00 IOPS, 1092.00 MiB/s [2024-10-16T09:31:46.221Z] 8768.00 IOPS, 1096.00 MiB/s 00:17:21.817 Latency(us) 00:17:21.817 [2024-10-16T09:31:46.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.817 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:21.817 nvme0n1 : 2.00 8764.08 1095.51 0.00 0.00 1822.80 1630.95 7685.59 00:17:21.817 [2024-10-16T09:31:46.221Z] =================================================================================================================== 00:17:21.817 [2024-10-16T09:31:46.221Z] Total : 8764.08 1095.51 0.00 0.00 1822.80 1630.95 7685.59 00:17:21.817 { 00:17:21.817 "results": [ 00:17:21.817 { 00:17:21.817 "job": "nvme0n1", 00:17:21.817 "core_mask": "0x2", 00:17:21.817 "workload": "randread", 00:17:21.817 "status": "finished", 00:17:21.817 "queue_depth": 16, 00:17:21.817 "io_size": 131072, 00:17:21.817 "runtime": 2.00272, 00:17:21.817 "iops": 8764.08085004394, 00:17:21.817 "mibps": 1095.5101062554925, 00:17:21.817 "io_failed": 0, 00:17:21.817 "io_timeout": 0, 00:17:21.817 "avg_latency_us": 1822.799767962211, 00:17:21.817 "min_latency_us": 1630.9527272727273, 00:17:21.817 "max_latency_us": 7685.585454545455 00:17:21.817 } 00:17:21.817 ], 00:17:21.817 "core_count": 1 00:17:21.817 } 00:17:21.817 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:21.817 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:21.817 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:21.817 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:21.817 | select(.opcode=="crc32c") 00:17:21.817 | "\(.module_name) \(.executed)"' 00:17:21.817 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79451 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79451 ']' 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79451 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79451 00:17:22.385 killing process with pid 79451 00:17:22.385 Received shutdown signal, test time was about 2.000000 seconds 00:17:22.385 00:17:22.385 Latency(us) 00:17:22.385 [2024-10-16T09:31:46.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:22.385 [2024-10-16T09:31:46.789Z] =================================================================================================================== 00:17:22.385 [2024-10-16T09:31:46.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79451' 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79451 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79451 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79498 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79498 /var/tmp/bperf.sock 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79498 ']' 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:22.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.385 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:22.385 [2024-10-16 09:31:46.739351] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:17:22.385 [2024-10-16 09:31:46.739618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79498 ] 00:17:22.644 [2024-10-16 09:31:46.868618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.644 [2024-10-16 09:31:46.912605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.644 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.644 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:22.644 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:22.644 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:22.644 09:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:22.903 [2024-10-16 09:31:47.248634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:22.903 09:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:22.903 09:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:23.470 nvme0n1 00:17:23.470 09:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:23.470 09:31:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:23.470 Running I/O for 2 seconds... 
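The reported average latencies can be sanity-checked against the queue depth: with a fixed queue depth, Little's law gives average latency of roughly queue depth divided by IOPS. For the randwrite table that follows (queue depth 128 at about 19.4K IOPS):

    awk 'BEGIN { printf "%.0f usec\n", 128 / 19426.34 * 1e6 }'    # prints 6589 usec, close to the reported 6582.94

The small gap is plausibly ramp-up and drain effects inside the 2-second window, so treat this only as a rough cross-check.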
00:17:25.343 19432.00 IOPS, 75.91 MiB/s [2024-10-16T09:31:50.005Z] 19431.50 IOPS, 75.90 MiB/s 00:17:25.602 Latency(us) 00:17:25.602 [2024-10-16T09:31:50.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.602 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.602 nvme0n1 : 2.01 19426.34 75.88 0.00 0.00 6582.94 3961.95 14298.76 00:17:25.602 [2024-10-16T09:31:50.006Z] =================================================================================================================== 00:17:25.602 [2024-10-16T09:31:50.006Z] Total : 19426.34 75.88 0.00 0.00 6582.94 3961.95 14298.76 00:17:25.602 { 00:17:25.602 "results": [ 00:17:25.602 { 00:17:25.602 "job": "nvme0n1", 00:17:25.602 "core_mask": "0x2", 00:17:25.602 "workload": "randwrite", 00:17:25.602 "status": "finished", 00:17:25.602 "queue_depth": 128, 00:17:25.602 "io_size": 4096, 00:17:25.602 "runtime": 2.00712, 00:17:25.602 "iops": 19426.342221690782, 00:17:25.602 "mibps": 75.88414930347962, 00:17:25.602 "io_failed": 0, 00:17:25.602 "io_timeout": 0, 00:17:25.602 "avg_latency_us": 6582.936039412359, 00:17:25.602 "min_latency_us": 3961.949090909091, 00:17:25.602 "max_latency_us": 14298.763636363636 00:17:25.602 } 00:17:25.602 ], 00:17:25.602 "core_count": 1 00:17:25.602 } 00:17:25.602 09:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:25.602 09:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:25.602 09:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:25.602 09:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:25.602 09:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:25.602 | select(.opcode=="crc32c") 00:17:25.602 | "\(.module_name) \(.executed)"' 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79498 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79498 ']' 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79498 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79498 00:17:25.861 killing process with pid 79498 00:17:25.861 Received shutdown signal, test time was about 2.000000 seconds 00:17:25.861 00:17:25.861 Latency(us) 00:17:25.861 [2024-10-16T09:31:50.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:25.861 [2024-10-16T09:31:50.265Z] =================================================================================================================== 00:17:25.861 [2024-10-16T09:31:50.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79498' 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79498 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79498 00:17:25.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79552 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79552 /var/tmp/bperf.sock 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79552 ']' 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.861 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.121 [2024-10-16 09:31:50.294284] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:17:26.121 [2024-10-16 09:31:50.294529] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79552 ] 00:17:26.121 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:26.121 Zero copy mechanism will not be used. 00:17:26.121 [2024-10-16 09:31:50.425418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.121 [2024-10-16 09:31:50.467965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.380 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.380 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:26.380 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:26.380 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:26.380 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:26.639 [2024-10-16 09:31:50.820151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:26.639 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.639 09:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.898 nvme0n1 00:17:26.898 09:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:26.898 09:31:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:27.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:27.156 Zero copy mechanism will not be used. 00:17:27.156 Running I/O for 2 seconds... 
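nvmf_digest_clean runs the same bperf recipe four times, varying only the workload, the I/O size and the queue depth: randread 4096/128, randread 131072/16, randwrite 4096/128 and randwrite 131072/16 (the fourth is starting here). A condensed sketch of that matrix, assembled from commands that appear verbatim in this log; the loop, the socket-wait and the teardown ordering are this sketch's own simplification, not code lifted from digest.sh:

    bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    for spec in randread:4096:128 randread:131072:16 randwrite:4096:128 randwrite:131072:16; do
      IFS=: read -r rw bs qd <<< "$spec"
      "$bperf" -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -q "$qd" -t 2 -z --wait-for-rpc &
      pid=$!
      until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done    # crude stand-in for waitforlisten
      rpc framework_start_init
      rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -b nvme0
      "$bperf_py" -s /var/tmp/bperf.sock perform_tests
      kill "$pid"; wait "$pid" || true
    done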
00:17:29.029 7593.00 IOPS, 949.12 MiB/s [2024-10-16T09:31:53.433Z] 7602.50 IOPS, 950.31 MiB/s 00:17:29.029 Latency(us) 00:17:29.029 [2024-10-16T09:31:53.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.029 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:29.029 nvme0n1 : 2.00 7600.52 950.06 0.00 0.00 2100.07 1414.98 11200.70 00:17:29.029 [2024-10-16T09:31:53.433Z] =================================================================================================================== 00:17:29.029 [2024-10-16T09:31:53.433Z] Total : 7600.52 950.06 0.00 0.00 2100.07 1414.98 11200.70 00:17:29.029 { 00:17:29.029 "results": [ 00:17:29.029 { 00:17:29.029 "job": "nvme0n1", 00:17:29.029 "core_mask": "0x2", 00:17:29.029 "workload": "randwrite", 00:17:29.029 "status": "finished", 00:17:29.029 "queue_depth": 16, 00:17:29.029 "io_size": 131072, 00:17:29.029 "runtime": 2.00368, 00:17:29.029 "iops": 7600.515052303761, 00:17:29.029 "mibps": 950.0643815379701, 00:17:29.029 "io_failed": 0, 00:17:29.029 "io_timeout": 0, 00:17:29.029 "avg_latency_us": 2100.0661140527345, 00:17:29.029 "min_latency_us": 1414.9818181818182, 00:17:29.029 "max_latency_us": 11200.698181818181 00:17:29.029 } 00:17:29.029 ], 00:17:29.029 "core_count": 1 00:17:29.029 } 00:17:29.029 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:29.029 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:29.029 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:29.029 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:29.029 | select(.opcode=="crc32c") 00:17:29.029 | "\(.module_name) \(.executed)"' 00:17:29.029 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79552 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79552 ']' 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79552 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79552 00:17:29.288 killing process with pid 79552 00:17:29.288 Received shutdown signal, test time was about 2.000000 seconds 00:17:29.288 00:17:29.288 Latency(us) 00:17:29.288 [2024-10-16T09:31:53.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:29.288 [2024-10-16T09:31:53.692Z] =================================================================================================================== 00:17:29.288 [2024-10-16T09:31:53.692Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79552' 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79552 00:17:29.288 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79552 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79374 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79374 ']' 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79374 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79374 00:17:29.548 killing process with pid 79374 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79374' 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79374 00:17:29.548 09:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79374 00:17:29.807 00:17:29.807 real 0m15.111s 00:17:29.807 user 0m29.162s 00:17:29.807 sys 0m4.536s 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.807 ************************************ 00:17:29.807 END TEST nvmf_digest_clean 00:17:29.807 ************************************ 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:29.807 ************************************ 00:17:29.807 START TEST nvmf_digest_error 00:17:29.807 ************************************ 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:17:29.807 09:31:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=79628 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 79628 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79628 ']' 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.807 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:29.807 [2024-10-16 09:31:54.162167] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:17:29.807 [2024-10-16 09:31:54.162393] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.067 [2024-10-16 09:31:54.293053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.067 [2024-10-16 09:31:54.333803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.067 [2024-10-16 09:31:54.333851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.067 [2024-10-16 09:31:54.333875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.067 [2024-10-16 09:31:54.333882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.067 [2024-10-16 09:31:54.333889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
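The nvmf_digest_error test starting here repeats the target bring-up, but first routes the accel framework's crc32c operation through the error-injection module (accel_assign_opc -o crc32c -m error in the trace below). Once the bperf controller is attached with --ddgst, the test switches injection to "corrupt", and the reads below are then expected to complete with transport-level digest errors ("data digest error on tqpair") rather than silent corruption. The RPCs involved, copied from the trace that follows and shown here against rpc.py's default /var/tmp/spdk.sock, which is assumed to be where the test's rpc_cmd helper points; the surrounding setup steps are abbreviated:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }    # default socket: /var/tmp/spdk.sock
    rpc accel_assign_opc -o crc32c -m error                        # done before framework init, hence --wait-for-rpc
    rpc framework_start_init                                       # then the usual transport/subsystem setup (not shown)
    rpc accel_error_inject_error -o crc32c -t disable              # let digests pass while bperf attaches
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256       # flags -t/-i copied as-is from the trace below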
00:17:30.067 [2024-10-16 09:31:54.334221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.067 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.067 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:30.067 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:30.067 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.067 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.326 [2024-10-16 09:31:54.486617] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.326 [2024-10-16 09:31:54.546789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.326 null0 00:17:30.326 [2024-10-16 09:31:54.595372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.326 [2024-10-16 09:31:54.619493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79647 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79647 /var/tmp/bperf.sock 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:30.326 09:31:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79647 ']' 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:30.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.326 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:30.326 [2024-10-16 09:31:54.673355] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:17:30.326 [2024-10-16 09:31:54.673608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79647 ] 00:17:30.585 [2024-10-16 09:31:54.806254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.585 [2024-10-16 09:31:54.854071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.585 [2024-10-16 09:31:54.908186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.585 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.585 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:30.585 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:30.586 09:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:31.152 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:31.152 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.152 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.153 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.153 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.153 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.153 nvme0n1 00:17:31.412 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:31.412 09:31:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.412 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:31.412 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.412 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:31.412 09:31:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:31.412 Running I/O for 2 seconds... 00:17:31.412 [2024-10-16 09:31:55.696206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.696268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.696282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.710643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.710677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.710706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.724418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.724453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.724481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.738536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.738599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.738628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.752789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.752823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.752851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.766691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.766725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17323 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.766752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.780625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.780660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.780687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.794822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.794855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.794882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.412 [2024-10-16 09:31:55.808748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.412 [2024-10-16 09:31:55.808929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.412 [2024-10-16 09:31:55.808945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.824215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.824390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.824407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.838421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.838468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.838479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.852483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.852518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.852545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.866408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.866604] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.866620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.880735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.880770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.880797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.894708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.894744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.894771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.908615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.908648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.908674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.671 [2024-10-16 09:31:55.922727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.671 [2024-10-16 09:31:55.922759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.671 [2024-10-16 09:31:55.922786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:55.936659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:55.936691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:55.936717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:55.952915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:55.952965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:55.952993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:55.969411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:55.969679] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:55.969697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:55.985413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:55.985452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:55.985482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:56.002991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:56.003026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:56.003053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:56.019705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:56.019742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:56.019770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:56.035735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:56.035770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:56.035797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:56.050901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:56.050936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:56.050964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.672 [2024-10-16 09:31:56.065927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.672 [2024-10-16 09:31:56.065961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.672 [2024-10-16 09:31:56.065989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.082029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.082063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.082091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.097335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.097372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.097385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.111860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.111892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.111919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.125978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.126010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.126037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.140250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.140283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.140311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.154416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.154449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.154476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.168561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.168593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.168620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.182523] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.182581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.182610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.196514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.196584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.196597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.210591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.210643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.210654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.224593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.224627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.224654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.238565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.238598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.238625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.252562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.252611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.252639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.266781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.266966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.266982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:31.932 [2024-10-16 09:31:56.281010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.281043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.281071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.295142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.295176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.295203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.309257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.309293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.309320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.932 [2024-10-16 09:31:56.324410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:31.932 [2024-10-16 09:31:56.324444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.932 [2024-10-16 09:31:56.324470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.339613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.339649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.339677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.354174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.354207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.354234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.368458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.368492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.368519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.382584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.382618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.382644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.396655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.396840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.396862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.411023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.411057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.411084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.425049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.425082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.425109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.439121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.439154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.439181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.453318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.453353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.453380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.467792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.467826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.467853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.481848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.481882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.481908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.496031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.496064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.496090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.510112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.510144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.510171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.524157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.524189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.524215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.538367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.538400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.538427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.552489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.552523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.552565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.566927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.567109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3618 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.567125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.192 [2024-10-16 09:31:56.582268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.192 [2024-10-16 09:31:56.582464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.192 [2024-10-16 09:31:56.582479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.597267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.597459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.597481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.618191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.618225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.618252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.632231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.632265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.632292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.646390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.646425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.646452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.660458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.660492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.660519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 17205.00 IOPS, 67.21 MiB/s [2024-10-16T09:31:56.855Z] [2024-10-16 09:31:56.676259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.676294] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.676322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.690177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.690360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.690383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.704379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.704413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.704440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.718931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.718965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.718993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.733110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.733143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.733170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.747345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.747378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.747405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.761448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.761694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.761711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.775909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.775944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.775973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.790009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.790044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.790072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.804064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.804098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.804125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.818374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.818407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.818435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.832414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.832447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.832474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.451 [2024-10-16 09:31:56.847954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.451 [2024-10-16 09:31:56.847988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.451 [2024-10-16 09:31:56.848015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.710 [2024-10-16 09:31:56.863228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.710 [2024-10-16 09:31:56.863262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.710 [2024-10-16 09:31:56.863289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.710 [2024-10-16 09:31:56.877565] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.710 [2024-10-16 09:31:56.877778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.710 [2024-10-16 09:31:56.877794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.710 [2024-10-16 09:31:56.891991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.710 [2024-10-16 09:31:56.892025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.710 [2024-10-16 09:31:56.892052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.710 [2024-10-16 09:31:56.906151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.710 [2024-10-16 09:31:56.906185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.710 [2024-10-16 09:31:56.906212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.710 [2024-10-16 09:31:56.920185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.710 [2024-10-16 09:31:56.920218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.710 [2024-10-16 09:31:56.920246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.710 [2024-10-16 09:31:56.934397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:56.934430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:56.934457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:56.948833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:56.948868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:56.948895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:56.962898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:56.962931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:56.962958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:32.711 [2024-10-16 09:31:56.976986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:56.977169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:56.977208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:56.991695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:56.991876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:56.991892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.006984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.007035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.007062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.023353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.023388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.023416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.039359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.039398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.039425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.054456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.054490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.054517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.068744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.068777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.068803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.082786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.082820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.082847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.097773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.097806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.097833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.711 [2024-10-16 09:31:57.113946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.711 [2024-10-16 09:31:57.113983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.711 [2024-10-16 09:31:57.114011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.970 [2024-10-16 09:31:57.130955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.970 [2024-10-16 09:31:57.131140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.970 [2024-10-16 09:31:57.131158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.970 [2024-10-16 09:31:57.146816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.970 [2024-10-16 09:31:57.146852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.970 [2024-10-16 09:31:57.146880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.970 [2024-10-16 09:31:57.162055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.970 [2024-10-16 09:31:57.162089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.970 [2024-10-16 09:31:57.162116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.970 [2024-10-16 09:31:57.177251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.970 [2024-10-16 09:31:57.177288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.970 [2024-10-16 09:31:57.177315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.970 [2024-10-16 09:31:57.192334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.970 [2024-10-16 09:31:57.192369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.970 [2024-10-16 09:31:57.192396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.970 [2024-10-16 09:31:57.207375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.970 [2024-10-16 09:31:57.207409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.970 [2024-10-16 09:31:57.207437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.970 [2024-10-16 09:31:57.222510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.971 [2024-10-16 09:31:57.222571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.971 [2024-10-16 09:31:57.222600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.971 [2024-10-16 09:31:57.237429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.971 [2024-10-16 09:31:57.237464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.971 [2024-10-16 09:31:57.237492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.971 [2024-10-16 09:31:57.252645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.971 [2024-10-16 09:31:57.252678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.971 [2024-10-16 09:31:57.252705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.971 [2024-10-16 09:31:57.267502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.971 [2024-10-16 09:31:57.267536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.971 [2024-10-16 09:31:57.267592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.971 [2024-10-16 09:31:57.282610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.972 [2024-10-16 09:31:57.282642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:32.972 [2024-10-16 09:31:57.282669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.972 [2024-10-16 09:31:57.296913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.972 [2024-10-16 09:31:57.296945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.972 [2024-10-16 09:31:57.296972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.972 [2024-10-16 09:31:57.311080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.972 [2024-10-16 09:31:57.311112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.972 [2024-10-16 09:31:57.311139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.972 [2024-10-16 09:31:57.325120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.972 [2024-10-16 09:31:57.325151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.972 [2024-10-16 09:31:57.325177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.972 [2024-10-16 09:31:57.339465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.972 [2024-10-16 09:31:57.339499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.972 [2024-10-16 09:31:57.339527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.972 [2024-10-16 09:31:57.353595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.972 [2024-10-16 09:31:57.353645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.972 [2024-10-16 09:31:57.353672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.972 [2024-10-16 09:31:57.367604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:32.973 [2024-10-16 09:31:57.367636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.973 [2024-10-16 09:31:57.367663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.382687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.382719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:24795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.382745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.396877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.397085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.397101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.411919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.411954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.411982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.426299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.426480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.426496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.440782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.440961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.440977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.455060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.455095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.455122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.468978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.469012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.469039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.483088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.483120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.483147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.497208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.497259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.497286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.511562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.511781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.511799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.526120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.526153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.526180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.540414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.540448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.540475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.560812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.560846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.560873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.575062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.232 [2024-10-16 09:31:57.575094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.232 [2024-10-16 09:31:57.575121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.232 [2024-10-16 09:31:57.589089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.233 
[2024-10-16 09:31:57.589281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.233 [2024-10-16 09:31:57.589299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.233 [2024-10-16 09:31:57.603519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.233 [2024-10-16 09:31:57.603735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.233 [2024-10-16 09:31:57.603752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.233 [2024-10-16 09:31:57.618274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.233 [2024-10-16 09:31:57.618453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.233 [2024-10-16 09:31:57.618470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.233 [2024-10-16 09:31:57.632550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.233 [2024-10-16 09:31:57.632584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.233 [2024-10-16 09:31:57.632611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.491 [2024-10-16 09:31:57.647698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.491 [2024-10-16 09:31:57.647879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.491 [2024-10-16 09:31:57.647894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.491 [2024-10-16 09:31:57.662304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2211240) 00:17:33.491 [2024-10-16 09:31:57.662338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.491 [2024-10-16 09:31:57.662366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:33.491 17268.00 IOPS, 67.45 MiB/s 00:17:33.491 Latency(us) 00:17:33.491 [2024-10-16T09:31:57.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.491 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:33.491 nvme0n1 : 2.00 17290.85 67.54 0.00 0.00 7397.17 6702.55 27167.65 00:17:33.491 [2024-10-16T09:31:57.895Z] =================================================================================================================== 00:17:33.491 [2024-10-16T09:31:57.895Z] Total : 17290.85 67.54 0.00 0.00 7397.17 6702.55 27167.65 00:17:33.491 { 00:17:33.491 "results": 
[ 00:17:33.491 { 00:17:33.491 "job": "nvme0n1", 00:17:33.491 "core_mask": "0x2", 00:17:33.491 "workload": "randread", 00:17:33.491 "status": "finished", 00:17:33.491 "queue_depth": 128, 00:17:33.491 "io_size": 4096, 00:17:33.491 "runtime": 2.00476, 00:17:33.491 "iops": 17290.84778227818, 00:17:33.491 "mibps": 67.54237414952414, 00:17:33.491 "io_failed": 0, 00:17:33.491 "io_timeout": 0, 00:17:33.491 "avg_latency_us": 7397.171212890503, 00:17:33.491 "min_latency_us": 6702.545454545455, 00:17:33.491 "max_latency_us": 27167.65090909091 00:17:33.491 } 00:17:33.491 ], 00:17:33.491 "core_count": 1 00:17:33.491 } 00:17:33.491 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:33.491 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:33.491 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:33.491 | .driver_specific 00:17:33.491 | .nvme_error 00:17:33.491 | .status_code 00:17:33.491 | .command_transient_transport_error' 00:17:33.491 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:33.749 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:17:33.749 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79647 00:17:33.749 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79647 ']' 00:17:33.749 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79647 00:17:33.749 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:33.749 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.749 09:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79647 00:17:33.749 killing process with pid 79647 00:17:33.749 Received shutdown signal, test time was about 2.000000 seconds 00:17:33.749 00:17:33.749 Latency(us) 00:17:33.749 [2024-10-16T09:31:58.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.749 [2024-10-16T09:31:58.153Z] =================================================================================================================== 00:17:33.749 [2024-10-16T09:31:58.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.749 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:33.749 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:33.749 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79647' 00:17:33.749 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79647 00:17:33.749 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79647 00:17:34.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
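The trace above shows how the digest test turns bdevperf's statistics into a pass/fail check: it queries the bdev's iostat over the bperf RPC socket and extracts the transient-transport-error counter with the jq filter printed in the log. A minimal sketch of that extraction, assuming the same SPDK checkout path and a bdevperf instance already listening on /var/tmp/bperf.sock (names other than those visible in the trace are illustrative):

#!/usr/bin/env bash
# Hedged sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions seen by a bdev.
# Paths mirror the ones in the log; adjust SPDK_DIR/RPC_SOCK for a local setup.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

# With --nvme-error-stat enabled, bdev_get_iostat reports per-status-code NVMe error
# counters under driver_specific.nvme_error, which the test reads back with jq.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_get_iostat -b "$BDEV" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The digest error test only asserts that at least one such error was observed.
if (( errcount > 0 )); then
  echo "observed $errcount transient transport errors"
else
  echo "no transient transport errors recorded" >&2
  exit 1
fi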
00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79700 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79700 /var/tmp/bperf.sock 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79700 ']' 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:34.007 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:34.007 [2024-10-16 09:31:58.232679] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:17:34.007 [2024-10-16 09:31:58.232919] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:17:34.007 Zero copy mechanism will not be used. 
00:17:34.007 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79700 ] 00:17:34.007 [2024-10-16 09:31:58.366205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.266 [2024-10-16 09:31:58.413938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.266 [2024-10-16 09:31:58.468079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.266 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:34.266 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:34.266 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:34.266 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:34.524 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:34.524 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.524 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:34.524 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.524 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.524 09:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.783 nvme0n1 00:17:34.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:34.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.783 09:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:35.042 09:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.042 09:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:35.042 09:31:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:35.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:35.042 Zero copy mechanism will not be used. 00:17:35.042 Running I/O for 2 seconds... 
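Before the 2-second run starts, the trace walks through the error-injection setup over the bperf socket: NVMe error statistics and unlimited retries are enabled, crc32c error injection is disabled while the controller is attached with data digest (--ddgst), and crc32c corruption is injected only afterwards so the subsequent reads trip data digest verification. A rough reconstruction of that RPC sequence follows; the bperf_rpc calls use /var/tmp/bperf.sock as shown in the trace, while the socket behind rpc_cmd is not visible here, so the sketch assumes rpc.py's default socket for those calls:

#!/usr/bin/env bash
# Hedged sketch of the setup steps visible in the trace; not the digest.sh source itself.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
APP_RPC="$SPDK_DIR/scripts/rpc.py"   # assumption: rpc_cmd targets the default socket

# Collect per-status-code NVMe error counters and retry transient errors indefinitely.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c intact while the controller attaches.
$APP_RPC accel_error_inject_error -o crc32c -t disable

# Attach the NVMe/TCP controller with data digest enabled so payloads are CRC-checked.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c corruption (32 errors) so reads fail data digest verification.
$APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the queued bdevperf job; the digest errors that follow are the expected outcome.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests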
00:17:35.042 [2024-10-16 09:31:59.321360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.321583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.321603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.042 [2024-10-16 09:31:59.325593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.325807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.325824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.042 [2024-10-16 09:31:59.329814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.329852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.329881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.042 [2024-10-16 09:31:59.333673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.333708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.333736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.042 [2024-10-16 09:31:59.337631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.337670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.337684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.042 [2024-10-16 09:31:59.341514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.341811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.341828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.042 [2024-10-16 09:31:59.345670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.345705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.345732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.042 [2024-10-16 09:31:59.349466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.042 [2024-10-16 09:31:59.349704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.042 [2024-10-16 09:31:59.349721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.353694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.353730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.353757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.357462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.357693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.357711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.361639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.361676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.361688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.365493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.365752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.365770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.369696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.369731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.369759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.373625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.373675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.373703] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.377415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.377654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.377686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.381555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.381738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.381754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.385668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.385703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.385730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.389487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.389733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.389749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.393639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.393672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.393700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.397496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.397724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.397741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.401666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.401701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 
09:31:59.401729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.405536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.405749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.405765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.409711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.409745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.409773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.413644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.413678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.413706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.417533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.417761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.417778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.421674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.421710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.421722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.425454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.425714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.425731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.429670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.429705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.429733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.433558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.433768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.433784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.437638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.437676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.437687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.441451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.441715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.441733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.043 [2024-10-16 09:31:59.445898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.043 [2024-10-16 09:31:59.445934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.043 [2024-10-16 09:31:59.445961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.450056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.450091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.450118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.454308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.454344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.454372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.458308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.458343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.458371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.462173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.462207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.462234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.466152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.466190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.466218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.470444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.470481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.470509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.474877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.474945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.474972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.479216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.479298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.479310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.483818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.483857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.483870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.488185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.488221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.488248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.492388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.492426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.492454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.496754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.496795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.496807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.501260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.501302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.501316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.304 [2024-10-16 09:31:59.505589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.304 [2024-10-16 09:31:59.505669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.304 [2024-10-16 09:31:59.505698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.509995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.510032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.510044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.514052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.514088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.514115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.518011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:35.305 [2024-10-16 09:31:59.518047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.518074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.521965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.522001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.522028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.526390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.526429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.526442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.531071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.531123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.531151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.535214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.535250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.535274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.539288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.539325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.539352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.543470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.543507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.543520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.547517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.547597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.547611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.551593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.551628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.551656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.555591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.555627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.555654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.559685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.559732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.559743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.563701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.563749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.563760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.567626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.567672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.567683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.571573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.571619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.571629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.575679] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.575726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.575737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.579640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.579687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.579699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.583703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.583750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.583761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.587660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.587708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.587719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.591681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.591728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.591739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.595582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.595629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.595639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.599524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.599582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.599594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:35.305 [2024-10-16 09:31:59.603407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.603455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.603466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.607514] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.607562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.607580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.611414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.611462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.611472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.615472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.615520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.615530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.619584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.619618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.305 [2024-10-16 09:31:59.619629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.305 [2024-10-16 09:31:59.623447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.305 [2024-10-16 09:31:59.623495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.623506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.627458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.627506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.627517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.631483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.631516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.631527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.635474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.635522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.635533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.639401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.639449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.639460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.643582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.643639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.643651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.647517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.647589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.647601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.651606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.651653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.651663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.655588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.655634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.655644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.659461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.659507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.659518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.663242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.663275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.663286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.667183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.667229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.667240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.670981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.671028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.671038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.674868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.674915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.674925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.678673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.678718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.678729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.682431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.682478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:35.306 [2024-10-16 09:31:59.682489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.686431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.686478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.686489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.690262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.690309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.690319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.694077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.694123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.694133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.697954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.698000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.698011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.701861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.701906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.701917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.306 [2024-10-16 09:31:59.706029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.306 [2024-10-16 09:31:59.706062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.306 [2024-10-16 09:31:59.706073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.710291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.710337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.710348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.714325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.714374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.714385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.718361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.718407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.718418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.722188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.722234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.722245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.726079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.726125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.726136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.730074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.730120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.730131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.733936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.733982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.733992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.737835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.737881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.737892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.741679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.741724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.567 [2024-10-16 09:31:59.741735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.567 [2024-10-16 09:31:59.745429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.567 [2024-10-16 09:31:59.745462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.745473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.749265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.749312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.749324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.753122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.753169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.753206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.757085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.757117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.757128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.760952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.760985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.760995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.764807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:35.568 [2024-10-16 09:31:59.764854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.764866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.768585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.768631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.768642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.772301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.772348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.772359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.776188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.776221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.776232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.779967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.780014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.780024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.783962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.784022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.784033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.788170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.788218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.788230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.792433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.792466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.792477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.796331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.796364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.796375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.800282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.800315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.800326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.804008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.804055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.804065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.807829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.807876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.807887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.811650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.811695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.811706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.815526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.815581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.815592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.819301] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.819348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.819358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.823189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.823236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.823247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.827015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.827061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.827071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.830811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.830857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.830867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.834585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.834631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.834642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.838402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.838434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.838444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.568 [2024-10-16 09:31:59.842225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.842271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.842281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:35.568 [2024-10-16 09:31:59.846063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.568 [2024-10-16 09:31:59.846108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.568 [2024-10-16 09:31:59.846119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.849939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.849985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.849995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.853817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.853864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.853874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.857652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.857697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.857708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.861467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.861500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.861511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.865276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.865324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.865335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.869092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.869138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.869149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.873029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.873062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.873073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.876849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.876881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.876891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.880660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.880692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.880703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.884480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.884513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.884523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.888262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.888308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.888318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.892087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.892134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.892145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.895846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.895892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.895903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.899747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.899793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.899805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.903557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.903602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.903612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.907364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.907411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.907422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.911250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.911296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.911307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.915090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.915136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.915146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.918847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.918893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.918904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.922663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.922709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:35.569 [2024-10-16 09:31:59.922719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.926418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.926465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.926476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.930380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.930427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.569 [2024-10-16 09:31:59.930438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.569 [2024-10-16 09:31:59.934181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.569 [2024-10-16 09:31:59.934228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.934239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.938029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.938075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.938086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.941755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.941801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.941812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.945484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.945560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.945593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.949252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.949300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.949311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.953141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.953211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.953239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.956915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.956961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.956972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.960768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.960800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.960811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.964474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.964520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.964531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.570 [2024-10-16 09:31:59.968550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.570 [2024-10-16 09:31:59.968596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.570 [2024-10-16 09:31:59.968608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.830 [2024-10-16 09:31:59.972704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.830 [2024-10-16 09:31:59.972750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.830 [2024-10-16 09:31:59.972777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:31:59.976513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:31:59.976568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:31:59.976580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:31:59.980603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:31:59.980648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:31:59.980658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:31:59.984524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:31:59.984570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:31:59.984581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:31:59.988316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:31:59.988349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:31:59.988360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:31:59.992187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:31:59.992234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:31:59.992246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:31:59.995982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:31:59.996029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:31:59.996039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:31:59.999804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:31:59.999850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:31:59.999861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.003579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:35.831 [2024-10-16 09:32:00.003625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.003635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.007393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.007439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.007450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.011207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.011253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.011264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.015052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.015099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.015110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.018804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.018850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.018861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.022634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.022679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.022690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.026329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.026376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.026387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.030090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.030136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.030146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.033925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.033987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.033998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.037748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.037795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.037805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.041548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.041595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.041608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.045804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.045840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.045853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.050172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.050220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.050231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.054069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.054117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.054128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.057814] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.057860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.057871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.061651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.061713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.061724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.065326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.065377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.065390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.069145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.069216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.069245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.073122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.831 [2024-10-16 09:32:00.073169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.831 [2024-10-16 09:32:00.073204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.831 [2024-10-16 09:32:00.077445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.077481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.077494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.081646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.081694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.081706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:35.832 [2024-10-16 09:32:00.086088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.086136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.086147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.090443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.090472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.090482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.095173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.095206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.095229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.099612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.099672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.099685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.104138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.104186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.104196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.108396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.108443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.108454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.112648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.112698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.112710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.116814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.116863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.116876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.120986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.121035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.121046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.124902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.124950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.124976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.128858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.128891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.128902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.132744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.132777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.132788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.136677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.136724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.136734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.140385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.140431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.140443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.144188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.144231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.144241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.148063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.148109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.148120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.152021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.152068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.152079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.155899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.155945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.155955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.159699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.159745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.159755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.163505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.163562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.832 [2024-10-16 09:32:00.163575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.167289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.167335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:35.832 [2024-10-16 09:32:00.167346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.832 [2024-10-16 09:32:00.171107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.832 [2024-10-16 09:32:00.171154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.171164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.175054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.175101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.175112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.178896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.178942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.178953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.182721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.182767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.182777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.186522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.186579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.186590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.190318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.190365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.190375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.194263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.194310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.194322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.198145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.198191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.198201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.202016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.202062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.202073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.205928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.205988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.205999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.209807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.209853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.209864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.213682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.213728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.213739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.217407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.217454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.217465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.221185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.221233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.221244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.224993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.225039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.225049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.228866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.228910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.228921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.833 [2024-10-16 09:32:00.233076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:35.833 [2024-10-16 09:32:00.233124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.833 [2024-10-16 09:32:00.233135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.237335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.237369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.237381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.241361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.241423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.241435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.245420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.245454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.245466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.249295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:36.096 [2024-10-16 09:32:00.249344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.249355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.253148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.253245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.253258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.257090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.257122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.257133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.260974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.261007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.261017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.264759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.264805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.264815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.096 [2024-10-16 09:32:00.268626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.096 [2024-10-16 09:32:00.268672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.096 [2024-10-16 09:32:00.268683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.272402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.272448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.272459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.276247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.276294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.276304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.280128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.280174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.280185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.284064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.284110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.284121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.287811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.287856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.287867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.291663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.291708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.291718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.295501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.295548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.295570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.299310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.299356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.299366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.303607] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.303662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.303673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.307849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.307894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.307905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.311696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.311741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.311751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.097 7797.00 IOPS, 974.62 MiB/s [2024-10-16T09:32:00.501Z] [2024-10-16 09:32:00.316771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.316817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.316828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.320643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.320688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.320698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.324451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.324498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.324509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.328436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.328483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.328494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.332382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.332428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.332439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.336322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.336368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.336379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.340253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.340300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.340311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.344132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.344179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.344190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.347929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.347974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.347985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.351822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.351868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.351879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.355701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.355746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.355756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.359611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.359656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.359667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.363425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.363471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.363482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.367382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.367429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.367440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.371206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.371251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.371262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.375130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.375176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.097 [2024-10-16 09:32:00.375187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.097 [2024-10-16 09:32:00.379069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.097 [2024-10-16 09:32:00.379115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.379126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.382925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.382985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:36.098 [2024-10-16 09:32:00.382996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.386839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.386887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.386897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.390713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.390759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.390770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.394622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.394679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.398501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.398547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.398585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.402364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.402410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.402421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.406251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.406297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.406308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.410116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.410162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.410174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.414045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.414092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.414103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.417851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.417896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.417907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.421694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.421739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.421749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.425458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.425507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.425532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.429296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.429329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.429340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.433128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.433182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.433225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.437040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.437086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.437097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.440829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.440875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.440886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.444658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.444704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.444715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.448421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.448467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.448478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.452346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.452393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.452404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.456216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.456262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.456273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.460088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.460135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.460145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.463999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:36.098 [2024-10-16 09:32:00.464045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.464056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.467874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.467920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.467931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.471668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.471712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.471722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.475552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.475607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.475618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.479390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.479437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.479448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.483284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.483330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.483340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.487120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.098 [2024-10-16 09:32:00.487167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.098 [2024-10-16 09:32:00.487177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.098 [2024-10-16 09:32:00.490998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.099 [2024-10-16 09:32:00.491044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.099 [2024-10-16 09:32:00.491055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.099 [2024-10-16 09:32:00.495117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.099 [2024-10-16 09:32:00.495163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.099 [2024-10-16 09:32:00.495174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.099 [2024-10-16 09:32:00.499518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.099 [2024-10-16 09:32:00.499567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.099 [2024-10-16 09:32:00.499580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.503888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.503938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.503950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.508225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.508274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.508286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.512510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.512570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.512583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.516911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.516957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.516970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.521000] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.521049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.521060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.525129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.525185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.525215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.529024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.529071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.529082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.532953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.533001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.533012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.536896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.536943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.536954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.540766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.377 [2024-10-16 09:32:00.540813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.377 [2024-10-16 09:32:00.540824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.377 [2024-10-16 09:32:00.544530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.544586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.544597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:36.378 [2024-10-16 09:32:00.548356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.548402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.548413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.552359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.552406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.552417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.556260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.556307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.556319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.560256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.560302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.560313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.564129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.564175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.564187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.568087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.568133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.568144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.571949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.571996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.572007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.575896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.575942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.575953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.579831] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.579877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.579888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.583740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.583786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.583798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.587747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.587794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.587805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.591683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.591728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.591739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.595547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.595604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.595615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.599396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.599442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.599452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.603366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.603412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.603423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.607219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.607265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.607275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.611439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.611488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.611500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.615631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.615678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.615690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.619858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.619908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.619920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.624082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.624129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.624140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.628105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.628153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:36.378 [2024-10-16 09:32:00.628164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.632148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.632195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.632206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.636098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.636144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.636155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.640079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.640125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.640135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.643969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.644015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.644025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.648453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.648501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.648512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.652692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.378 [2024-10-16 09:32:00.652740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.378 [2024-10-16 09:32:00.652752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.378 [2024-10-16 09:32:00.656793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.656828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.656839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.661291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.661330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.661343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.665792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.665842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.665854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.670292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.670339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.670350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.674570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.674645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.674658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.679081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.679113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.679124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.683315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.683362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.683374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.687565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.687622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.687633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.691574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.691633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.691644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.695573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.695619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.695629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.699690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.699735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.699746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.703703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.703749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.703760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.707637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.707683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.707694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.711785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.711832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.711843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.715776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:36.379 [2024-10-16 09:32:00.715823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.715834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.719719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.719766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.719777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.723908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.723954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.723966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.727886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.727932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.727943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.731858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.731904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.731915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.735945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.735976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.735987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.739927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.739990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.740001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.743930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.743991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.744002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.747902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.747948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.747959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.752070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.752118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.752129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.756065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.756112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.756123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.759998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.760045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.760055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.764096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.764130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.764141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.768043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.768075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.768086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.379 [2024-10-16 09:32:00.772004] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.379 [2024-10-16 09:32:00.772051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.379 [2024-10-16 09:32:00.772062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.380 [2024-10-16 09:32:00.776244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.380 [2024-10-16 09:32:00.776279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.380 [2024-10-16 09:32:00.776290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.380 [2024-10-16 09:32:00.780690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.380 [2024-10-16 09:32:00.780740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.380 [2024-10-16 09:32:00.780752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.640 [2024-10-16 09:32:00.784713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.640 [2024-10-16 09:32:00.784759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.640 [2024-10-16 09:32:00.784770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.640 [2024-10-16 09:32:00.789230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.640 [2024-10-16 09:32:00.789280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.640 [2024-10-16 09:32:00.789293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.640 [2024-10-16 09:32:00.793284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.640 [2024-10-16 09:32:00.793320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.640 [2024-10-16 09:32:00.793332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.640 [2024-10-16 09:32:00.797259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.640 [2024-10-16 09:32:00.797307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.640 [2024-10-16 09:32:00.797320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:17:36.640 [2024-10-16 09:32:00.801662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.640 [2024-10-16 09:32:00.801710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.640 [2024-10-16 09:32:00.801721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.805712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.805759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.805770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.809763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.809809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.809820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.813536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.813627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.813639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.817429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.817463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.817475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.821317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.821365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.821376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.825262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.825295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.825307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.829260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.829295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.829306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.833026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.833072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.833083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.836841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.836887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.836898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.840759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.840788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.840798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.844711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.844743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.844754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.848785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.848820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.848832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.852991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.853025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.853037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.857098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.857146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.857157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.861221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.861254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.861266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.865287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.865321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.865332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.869073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.869119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.869130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.872933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.872978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.872988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.876756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.876801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.876812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.880580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.880625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:36.641 [2024-10-16 09:32:00.880635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.884346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.884393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.884403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.888278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.888311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.888322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.892099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.892145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.892156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.896026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.896072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.896083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.900115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.900161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.900171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.641 [2024-10-16 09:32:00.904405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.641 [2024-10-16 09:32:00.904451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.641 [2024-10-16 09:32:00.904462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.908525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.908596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.908607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.912386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.912432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.912443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.916294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.916342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.916352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.920215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.920248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.920258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.924182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.924214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.924225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.928018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.928064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.928075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.931870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.931915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.931926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.935723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.935769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.935780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.939524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.939579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.939590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.943391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.943437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.943448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.947235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.947282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.947292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.951181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.951228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.951239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.955115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.955161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.955172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.959016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.959062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.959073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.962896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:36.642 [2024-10-16 09:32:00.962942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.962953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.966770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.966816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.966826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.970532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.970589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.970600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.974376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.974423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.974433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.978211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.978257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.978268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.982033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.982078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.982088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.985849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.985896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.985907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.989733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.989780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.989790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.993356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.993404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.993415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:00.997226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:00.997259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.642 [2024-10-16 09:32:00.997271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.642 [2024-10-16 09:32:01.001004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.642 [2024-10-16 09:32:01.001050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.001060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.005022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.005055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.005066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.008826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.008871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.008881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.012653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.012698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.012708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.016427] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.016474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.016485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.020244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.020290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.020301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.024108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.024154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.024164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.027931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.027978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.027989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.031701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.031747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.031758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.035507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.035563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.035576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.643 [2024-10-16 09:32:01.039382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.039429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.039440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:17:36.643 [2024-10-16 09:32:01.043663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.643 [2024-10-16 09:32:01.043708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.643 [2024-10-16 09:32:01.043719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.903 [2024-10-16 09:32:01.047728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.903 [2024-10-16 09:32:01.047774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.903 [2024-10-16 09:32:01.047785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.903 [2024-10-16 09:32:01.051963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.903 [2024-10-16 09:32:01.052010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.052021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.055948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.055996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.056007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.059885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.059930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.059941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.063779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.063825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.063836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.067649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.067695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.067706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.071495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.071540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.071563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.075324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.075371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.075381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.079384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.079431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.079441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.083302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.083348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.083359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.087170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.087216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.087226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.091038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.091083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.091094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.094849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.094896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.094907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.099077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.099124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.099135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.103438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.103484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.103495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.108057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.108105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.108117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.112359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.112406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.112417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.116930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.116993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.117018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.121339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.121374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.121387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.125810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.125845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:36.904 [2024-10-16 09:32:01.125857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.130180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.130227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.130237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.134446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.134492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.134503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.138787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.138835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.138846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.143133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.143178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.143189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.147007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.147054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.147065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.150931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.150961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.150972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.154880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.154925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.154936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.158981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.159013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.159024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.163177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.163223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.904 [2024-10-16 09:32:01.163233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.904 [2024-10-16 09:32:01.167429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.904 [2024-10-16 09:32:01.167475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.167486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.171326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.171382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.175388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.175434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.175445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.179363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.179409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.179420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.183257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.183303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.183314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.187184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.187229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.187240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.191070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.191115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.191126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.194992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.195038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.195049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.198890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.198935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.198945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.202764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.202810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.202836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.206650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.206695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.206706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.210443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 
00:17:36.905 [2024-10-16 09:32:01.210489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.210499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.214423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.214469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.214479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.218252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.218298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.218309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.222159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.222206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.222216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.226022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.226068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.226079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.229886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.229933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.229959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.233674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.233720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.233731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.237473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.237517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.237528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.241143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.241239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.241251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.245039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.245095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.245106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.248916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.248962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.248973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.252904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.252953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.252964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.256884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.256917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.256928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.260823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.260856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.260867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.264808] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.264840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.264851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.268604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.268636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.268647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.272349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.272395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.272406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.276187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.905 [2024-10-16 09:32:01.276233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.905 [2024-10-16 09:32:01.276244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.905 [2024-10-16 09:32:01.280011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.906 [2024-10-16 09:32:01.280057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.906 [2024-10-16 09:32:01.280068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.906 [2024-10-16 09:32:01.283865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.906 [2024-10-16 09:32:01.283911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.906 [2024-10-16 09:32:01.283925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.906 [2024-10-16 09:32:01.287795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.906 [2024-10-16 09:32:01.287840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.906 [2024-10-16 09:32:01.287851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:36.906 [2024-10-16 09:32:01.291643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.906 [2024-10-16 09:32:01.291689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.906 [2024-10-16 09:32:01.291699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.906 [2024-10-16 09:32:01.295526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.906 [2024-10-16 09:32:01.295582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.906 [2024-10-16 09:32:01.295593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:36.906 [2024-10-16 09:32:01.299307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.906 [2024-10-16 09:32:01.299353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.906 [2024-10-16 09:32:01.299364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:36.906 [2024-10-16 09:32:01.303352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:36.906 [2024-10-16 09:32:01.303401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:36.906 [2024-10-16 09:32:01.303412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.165 [2024-10-16 09:32:01.307572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:37.165 [2024-10-16 09:32:01.307644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.165 [2024-10-16 09:32:01.307671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.165 [2024-10-16 09:32:01.311652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:37.165 [2024-10-16 09:32:01.311697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.165 [2024-10-16 09:32:01.311708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.165 7789.00 IOPS, 973.62 MiB/s [2024-10-16T09:32:01.569Z] [2024-10-16 09:32:01.316761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd4b2b0) 00:17:37.165 [2024-10-16 09:32:01.316806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:37.165 [2024-10-16 09:32:01.316817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.165 00:17:37.165 Latency(us) 00:17:37.165 [2024-10-16T09:32:01.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.165 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:37.165 nvme0n1 : 2.00 7785.89 973.24 0.00 0.00 2051.80 1675.64 5272.67 00:17:37.165 [2024-10-16T09:32:01.569Z] =================================================================================================================== 00:17:37.165 [2024-10-16T09:32:01.569Z] Total : 7785.89 973.24 0.00 0.00 2051.80 1675.64 5272.67 00:17:37.165 { 00:17:37.165 "results": [ 00:17:37.165 { 00:17:37.165 "job": "nvme0n1", 00:17:37.165 "core_mask": "0x2", 00:17:37.165 "workload": "randread", 00:17:37.165 "status": "finished", 00:17:37.165 "queue_depth": 16, 00:17:37.165 "io_size": 131072, 00:17:37.165 "runtime": 2.002853, 00:17:37.165 "iops": 7785.893423032045, 00:17:37.165 "mibps": 973.2366778790056, 00:17:37.165 "io_failed": 0, 00:17:37.165 "io_timeout": 0, 00:17:37.165 "avg_latency_us": 2051.797421385848, 00:17:37.165 "min_latency_us": 1675.6363636363637, 00:17:37.165 "max_latency_us": 5272.669090909091 00:17:37.165 } 00:17:37.165 ], 00:17:37.165 "core_count": 1 00:17:37.165 } 00:17:37.165 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:37.165 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:37.165 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:37.165 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:37.165 | .driver_specific 00:17:37.165 | .nvme_error 00:17:37.165 | .status_code 00:17:37.165 | .command_transient_transport_error' 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 503 > 0 )) 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79700 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79700 ']' 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79700 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79700 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:37.424 killing process with pid 79700 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79700' 00:17:37.424 Received shutdown signal, test time was about 2.000000 seconds 00:17:37.424 00:17:37.424 Latency(us) 00:17:37.424 [2024-10-16T09:32:01.828Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.424 [2024-10-16T09:32:01.828Z] =================================================================================================================== 00:17:37.424 [2024-10-16T09:32:01.828Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79700 00:17:37.424 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79700 00:17:37.683 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:37.683 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:37.683 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:37.683 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79751 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79751 /var/tmp/bperf.sock 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79751 ']' 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:37.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.684 09:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.684 [2024-10-16 09:32:01.908408] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:17:37.684 [2024-10-16 09:32:01.908492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79751 ] 00:17:37.684 [2024-10-16 09:32:02.041317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.684 [2024-10-16 09:32:02.084324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.943 [2024-10-16 09:32:02.139115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:37.943 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.943 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:37.943 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:37.943 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:38.201 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:38.201 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.201 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.201 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.201 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.201 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.460 nvme0n1 00:17:38.460 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:38.460 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.460 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.460 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.460 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:38.461 09:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:38.720 Running I/O for 2 seconds... 
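(The xtrace above is the entire setup for this randwrite pass, interleaved with bdevperf's startup output. Condensed into plain shell it is roughly the sketch below. Paths, sockets, and arguments are copied from the trace itself; the one assumption is that rpc_cmd, whose expansion is not shown in this excerpt, addresses the NVMe-oF target application's default RPC socket, while bperf_rpc explicitly uses /var/tmp/bperf.sock as traced.)

  # Condensed restatement of the traced digest-error setup -- a sketch, not a new script.
  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumption: rpc_cmd -> target app's default RPC socket

  # bdevperf on core mask 0x2: 4096-byte random writes, queue depth 128, 2-second run,
  # started idle (-z) in the background (bperfpid 79751 in the trace) until perform_tests is called.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &

  # Enable per-opcode NVMe error statistics and retry transient errors indefinitely.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target with data digest (--ddgst) enabled while crc32c injection is still disabled.
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 256 crc32c operations so the host-side digest check fails, then drive the workload.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # Afterwards digest.sh reads the transient transport error count from iostat
  # (the randread pass above counted 503 such errors).
  $BPERF_RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error'
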
00:17:38.720 [2024-10-16 09:32:02.946244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fef90 00:17:38.720 [2024-10-16 09:32:02.948586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:02.948626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:02.960276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166feb58 00:17:38.720 [2024-10-16 09:32:02.962609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:02.962653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:02.973889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fe2e8 00:17:38.720 [2024-10-16 09:32:02.976106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:02.976149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:02.987331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fda78 00:17:38.720 [2024-10-16 09:32:02.989719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:02.989761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.000990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fd208 00:17:38.720 [2024-10-16 09:32:03.003459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.003503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.016330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fc998 00:17:38.720 [2024-10-16 09:32:03.018858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.018918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.031623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fc128 00:17:38.720 [2024-10-16 09:32:03.034083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.034127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:17:38.720 [2024-10-16 09:32:03.046607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fb8b8 00:17:38.720 [2024-10-16 09:32:03.048780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.048825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.060834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fb048 00:17:38.720 [2024-10-16 09:32:03.063186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.063216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.076013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fa7d8 00:17:38.720 [2024-10-16 09:32:03.078495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.078540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.090275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f9f68 00:17:38.720 [2024-10-16 09:32:03.092432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.092476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.104531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f96f8 00:17:38.720 [2024-10-16 09:32:03.106706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.106736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:38.720 [2024-10-16 09:32:03.118807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f8e88 00:17:38.720 [2024-10-16 09:32:03.121016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.720 [2024-10-16 09:32:03.121063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.134093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f8618 00:17:38.980 [2024-10-16 09:32:03.136176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.136220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.148642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f7da8 00:17:38.980 [2024-10-16 09:32:03.151002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.151061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.164758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f7538 00:17:38.980 [2024-10-16 09:32:03.167177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.167221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.180435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f6cc8 00:17:38.980 [2024-10-16 09:32:03.182812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.182858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.195251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f6458 00:17:38.980 [2024-10-16 09:32:03.197336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.197366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.208942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f5be8 00:17:38.980 [2024-10-16 09:32:03.211011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.211039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.222437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f5378 00:17:38.980 [2024-10-16 09:32:03.224458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.224486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.236166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f4b08 00:17:38.980 [2024-10-16 09:32:03.238189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.238232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.249704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f4298 00:17:38.980 [2024-10-16 09:32:03.251533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.251599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.263127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f3a28 00:17:38.980 [2024-10-16 09:32:03.265045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.265074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.276861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f31b8 00:17:38.980 [2024-10-16 09:32:03.278818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.278861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.290237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f2948 00:17:38.980 [2024-10-16 09:32:03.292135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.292163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.304006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f20d8 00:17:38.980 [2024-10-16 09:32:03.305904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.305947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.317426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f1868 00:17:38.980 [2024-10-16 09:32:03.319314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.319355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.331487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f0ff8 00:17:38.980 [2024-10-16 09:32:03.333647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.333692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.345388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f0788 00:17:38.980 [2024-10-16 09:32:03.347262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.347304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.358986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eff18 00:17:38.980 [2024-10-16 09:32:03.360782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.360810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:38.980 [2024-10-16 09:32:03.372605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ef6a8 00:17:38.980 [2024-10-16 09:32:03.374459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.980 [2024-10-16 09:32:03.374501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.386705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eee38 00:17:39.240 [2024-10-16 09:32:03.388613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.388695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.400418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ee5c8 00:17:39.240 [2024-10-16 09:32:03.402265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.402293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.413989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166edd58 00:17:39.240 [2024-10-16 09:32:03.415675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.415717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.427225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ed4e8 00:17:39.240 [2024-10-16 09:32:03.428984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.429026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.440879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ecc78 00:17:39.240 [2024-10-16 09:32:03.442641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.442683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.454294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ec408 00:17:39.240 [2024-10-16 09:32:03.456035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.456077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.467797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ebb98 00:17:39.240 [2024-10-16 09:32:03.469462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.469507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.481035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eb328 00:17:39.240 [2024-10-16 09:32:03.482747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.482790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.494503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eaab8 00:17:39.240 [2024-10-16 09:32:03.496076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.496119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.507874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ea248 00:17:39.240 [2024-10-16 09:32:03.509611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.509683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.521761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e99d8 00:17:39.240 [2024-10-16 09:32:03.523416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.523458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.535400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e9168 00:17:39.240 [2024-10-16 09:32:03.537019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.537049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.549176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e88f8 00:17:39.240 [2024-10-16 09:32:03.550877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.550920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.562873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e8088 00:17:39.240 [2024-10-16 09:32:03.564388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.564431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.576712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e7818 00:17:39.240 [2024-10-16 09:32:03.578293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.578336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.591264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e6fa8 00:17:39.240 [2024-10-16 09:32:03.593071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.593115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.605053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e6738 00:17:39.240 [2024-10-16 09:32:03.606657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.606701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.618744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e5ec8 00:17:39.240 [2024-10-16 09:32:03.620202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.620244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:39.240 [2024-10-16 09:32:03.632388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e5658 00:17:39.240 [2024-10-16 09:32:03.633923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.240 [2024-10-16 09:32:03.633965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.646727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e4de8 00:17:39.500 [2024-10-16 09:32:03.648324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.500 [2024-10-16 09:32:03.648370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.660739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e4578 00:17:39.500 [2024-10-16 09:32:03.662253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.500 [2024-10-16 09:32:03.662296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.674227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e3d08 00:17:39.500 [2024-10-16 09:32:03.675663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.500 [2024-10-16 09:32:03.675704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.687595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e3498 00:17:39.500 [2024-10-16 09:32:03.688989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.500 [2024-10-16 09:32:03.689017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.701322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e2c28 00:17:39.500 [2024-10-16 09:32:03.702791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.500 [2024-10-16 09:32:03.702834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.714863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e23b8 00:17:39.500 [2024-10-16 09:32:03.716184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.500 [2024-10-16 
09:32:03.716226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.728212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e1b48 00:17:39.500 [2024-10-16 09:32:03.729664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.500 [2024-10-16 09:32:03.729707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:39.500 [2024-10-16 09:32:03.741695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e12d8 00:17:39.500 [2024-10-16 09:32:03.743038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.743081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.755103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e0a68 00:17:39.501 [2024-10-16 09:32:03.756385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.756428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.768659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e01f8 00:17:39.501 [2024-10-16 09:32:03.769996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.770039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.782025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166df988 00:17:39.501 [2024-10-16 09:32:03.783315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.783357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.795350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166df118 00:17:39.501 [2024-10-16 09:32:03.796652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.796721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.808773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166de8a8 00:17:39.501 [2024-10-16 09:32:03.810071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
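Every repetition above follows the same pattern: tcp.c reports that the CRC32C data digest (DDGST) recomputed over a received data PDU does not match the digest that arrived with it, and the affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the initiator retries instead of failing the I/O (the run's final results below show io_failed: 0). A minimal sketch of the CRC32C (Castagnoli) calculation involved is shown here; it assumes the standard reflected polynomial and is illustrative only, not SPDK's accel-offloaded implementation.

    # Minimal CRC32C (Castagnoli) sketch -- illustrative only, not SPDK's
    # accelerated implementation. NVMe/TCP carries this value as the DDGST
    # field when data digest is negotiated (the --ddgst attach flag below).
    def _crc32c_table():
        poly = 0x82F63B78                      # reflected form of 0x1EDC6F41
        table = []
        for byte in range(256):
            crc = byte
            for _ in range(8):
                crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
            table.append(crc)
        return table

    _TABLE = _crc32c_table()

    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for b in data:
            crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
        return crc ^ 0xFFFFFFFF                # final inversion

    # The receiver recomputes crc32c(pdu_payload) and compares it with the
    # DDGST it was sent; with the crc32c operation deliberately corrupted by
    # this test, the comparison fails and the command completes as logged above.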
00:17:39.501 [2024-10-16 09:32:03.810114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.822162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166de038 00:17:39.501 [2024-10-16 09:32:03.823362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.823405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.840892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166de038 00:17:39.501 [2024-10-16 09:32:03.843163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.843205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.855566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166de8a8 00:17:39.501 [2024-10-16 09:32:03.857889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.857934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.869088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166df118 00:17:39.501 [2024-10-16 09:32:03.871343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.871386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.882575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166df988 00:17:39.501 [2024-10-16 09:32:03.884753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.884797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:39.501 [2024-10-16 09:32:03.895903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e01f8 00:17:39.501 [2024-10-16 09:32:03.898172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.501 [2024-10-16 09:32:03.898213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:39.760 [2024-10-16 09:32:03.910540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e0a68 00:17:39.760 [2024-10-16 09:32:03.912629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25538 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:03.912673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:03.924116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e12d8 00:17:39.761 [2024-10-16 09:32:03.926362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:03.926405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:39.761 18218.00 IOPS, 71.16 MiB/s [2024-10-16T09:32:04.165Z] [2024-10-16 09:32:03.939185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e1b48 00:17:39.761 [2024-10-16 09:32:03.941309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:03.941354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:03.952554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e23b8 00:17:39.761 [2024-10-16 09:32:03.954714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:03.954744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:03.966393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e2c28 00:17:39.761 [2024-10-16 09:32:03.968515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:03.968567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:03.979784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e3498 00:17:39.761 [2024-10-16 09:32:03.981898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:03.981940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:03.993146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e3d08 00:17:39.761 [2024-10-16 09:32:03.995288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:03.995329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.006619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e4578 00:17:39.761 [2024-10-16 09:32:04.008647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.008690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.020355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e4de8 00:17:39.761 [2024-10-16 09:32:04.022485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.022528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.033733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e5658 00:17:39.761 [2024-10-16 09:32:04.035706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.035749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.047050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e5ec8 00:17:39.761 [2024-10-16 09:32:04.049020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.049061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.060566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e6738 00:17:39.761 [2024-10-16 09:32:04.062529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.062597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.074038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e6fa8 00:17:39.761 [2024-10-16 09:32:04.076012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.076055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.087505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e7818 00:17:39.761 [2024-10-16 09:32:04.089590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.089645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.100908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e8088 00:17:39.761 [2024-10-16 09:32:04.102951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.102982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.115559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e88f8 00:17:39.761 [2024-10-16 09:32:04.117572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.117630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.129027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e9168 00:17:39.761 [2024-10-16 09:32:04.131422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.131450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.142925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166e99d8 00:17:39.761 [2024-10-16 09:32:04.144810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.144842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:39.761 [2024-10-16 09:32:04.156532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ea248 00:17:39.761 [2024-10-16 09:32:04.158427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.761 [2024-10-16 09:32:04.158459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.171812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eaab8 00:17:40.021 [2024-10-16 09:32:04.174170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.174367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.188331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eb328 00:17:40.021 [2024-10-16 09:32:04.190492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.190524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.204401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ebb98 00:17:40.021 [2024-10-16 09:32:04.206473] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.206508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.220148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ec408 00:17:40.021 [2024-10-16 09:32:04.222238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.222271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.235131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ecc78 00:17:40.021 [2024-10-16 09:32:04.237044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.237077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.249828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ed4e8 00:17:40.021 [2024-10-16 09:32:04.251663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.251697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.264061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166edd58 00:17:40.021 [2024-10-16 09:32:04.265959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.265992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.278593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ee5c8 00:17:40.021 [2024-10-16 09:32:04.280345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.280378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.292596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eee38 00:17:40.021 [2024-10-16 09:32:04.294503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.294536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.306905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166ef6a8 00:17:40.021 [2024-10-16 
09:32:04.308819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.308851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.321377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166eff18 00:17:40.021 [2024-10-16 09:32:04.323222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.323254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.335513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f0788 00:17:40.021 [2024-10-16 09:32:04.337323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.337359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.349758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f0ff8 00:17:40.021 [2024-10-16 09:32:04.351491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.351522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.364279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f1868 00:17:40.021 [2024-10-16 09:32:04.366243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.366277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:40.021 [2024-10-16 09:32:04.379212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f20d8 00:17:40.021 [2024-10-16 09:32:04.380880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.021 [2024-10-16 09:32:04.380928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:40.022 [2024-10-16 09:32:04.393010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f2948 00:17:40.022 [2024-10-16 09:32:04.395027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.022 [2024-10-16 09:32:04.395075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:40.022 [2024-10-16 09:32:04.407014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f31b8 00:17:40.022 
[2024-10-16 09:32:04.408563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.022 [2024-10-16 09:32:04.408620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:40.022 [2024-10-16 09:32:04.420533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f3a28 00:17:40.022 [2024-10-16 09:32:04.422286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.022 [2024-10-16 09:32:04.422333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:40.281 [2024-10-16 09:32:04.435323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f4298 00:17:40.281 [2024-10-16 09:32:04.436975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.281 [2024-10-16 09:32:04.437007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:40.281 [2024-10-16 09:32:04.449125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f4b08 00:17:40.282 [2024-10-16 09:32:04.450789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.450832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.462821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f5378 00:17:40.282 [2024-10-16 09:32:04.464552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.464605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.476613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f5be8 00:17:40.282 [2024-10-16 09:32:04.478263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.478294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.490247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f6458 00:17:40.282 [2024-10-16 09:32:04.491770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.491801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.503680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f6cc8 
00:17:40.282 [2024-10-16 09:32:04.505426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.505461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.517869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f7538 00:17:40.282 [2024-10-16 09:32:04.519297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.519327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.531256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f7da8 00:17:40.282 [2024-10-16 09:32:04.532708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.532739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.544563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f8618 00:17:40.282 [2024-10-16 09:32:04.546034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.546064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.558030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f8e88 00:17:40.282 [2024-10-16 09:32:04.559405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.559437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.571360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f96f8 00:17:40.282 [2024-10-16 09:32:04.572952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.572984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.584883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f9f68 00:17:40.282 [2024-10-16 09:32:04.586320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.586352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.598673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with 
pdu=0x2000166fa7d8 00:17:40.282 [2024-10-16 09:32:04.600008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.600038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.611980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fb048 00:17:40.282 [2024-10-16 09:32:04.613737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.613766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.625800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fb8b8 00:17:40.282 [2024-10-16 09:32:04.627110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.627140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.639080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fc128 00:17:40.282 [2024-10-16 09:32:04.640358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.640391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.652768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fc998 00:17:40.282 [2024-10-16 09:32:04.654379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.654412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.666526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fd208 00:17:40.282 [2024-10-16 09:32:04.667874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.667906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:40.282 [2024-10-16 09:32:04.680923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fda78 00:17:40.282 [2024-10-16 09:32:04.682661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.282 [2024-10-16 09:32:04.682718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.695778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xdf1230) with pdu=0x2000166fe2e8 00:17:40.542 [2024-10-16 09:32:04.697304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.697340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.709458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166feb58 00:17:40.542 [2024-10-16 09:32:04.710768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.710801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.728285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fef90 00:17:40.542 [2024-10-16 09:32:04.730700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.730730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.741888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166feb58 00:17:40.542 [2024-10-16 09:32:04.744081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.744111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.755297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fe2e8 00:17:40.542 [2024-10-16 09:32:04.757610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.757654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.768710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fda78 00:17:40.542 [2024-10-16 09:32:04.770984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.771018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.782436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fd208 00:17:40.542 [2024-10-16 09:32:04.784695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.784726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.795836] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fc998 00:17:40.542 [2024-10-16 09:32:04.798140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.798170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.809387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fc128 00:17:40.542 [2024-10-16 09:32:04.811831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.811861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.823055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fb8b8 00:17:40.542 [2024-10-16 09:32:04.825162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.825391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.836895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fb048 00:17:40.542 [2024-10-16 09:32:04.839092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.839124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.850352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166fa7d8 00:17:40.542 [2024-10-16 09:32:04.852557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.852596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.863867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f9f68 00:17:40.542 [2024-10-16 09:32:04.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.866078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.877287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f96f8 00:17:40.542 [2024-10-16 09:32:04.879662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.879693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.891126] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f8e88 00:17:40.542 [2024-10-16 09:32:04.893204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.893382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.904841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f8618 00:17:40.542 [2024-10-16 09:32:04.906991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.907025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:40.542 [2024-10-16 09:32:04.918281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f7da8 00:17:40.542 [2024-10-16 09:32:04.920510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.920563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:40.542 18217.50 IOPS, 71.16 MiB/s [2024-10-16T09:32:04.946Z] [2024-10-16 09:32:04.933769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf1230) with pdu=0x2000166f7538 00:17:40.542 [2024-10-16 09:32:04.936018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:40.542 [2024-10-16 09:32:04.936050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:40.542 00:17:40.542 Latency(us) 00:17:40.542 [2024-10-16T09:32:04.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.542 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.542 nvme0n1 : 2.01 18189.57 71.05 0.00 0.00 7030.26 3723.64 26214.40 00:17:40.542 [2024-10-16T09:32:04.946Z] =================================================================================================================== 00:17:40.542 [2024-10-16T09:32:04.946Z] Total : 18189.57 71.05 0.00 0.00 7030.26 3723.64 26214.40 00:17:40.542 { 00:17:40.542 "results": [ 00:17:40.542 { 00:17:40.542 "job": "nvme0n1", 00:17:40.542 "core_mask": "0x2", 00:17:40.542 "workload": "randwrite", 00:17:40.542 "status": "finished", 00:17:40.542 "queue_depth": 128, 00:17:40.542 "io_size": 4096, 00:17:40.542 "runtime": 2.010108, 00:17:40.542 "iops": 18189.56991365638, 00:17:40.542 "mibps": 71.05300747522024, 00:17:40.542 "io_failed": 0, 00:17:40.542 "io_timeout": 0, 00:17:40.542 "avg_latency_us": 7030.260691757439, 00:17:40.542 "min_latency_us": 3723.6363636363635, 00:17:40.542 "max_latency_us": 26214.4 00:17:40.542 } 00:17:40.542 ], 00:17:40.542 "core_count": 1 00:17:40.542 } 00:17:40.801 09:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:40.801 09:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:40.801 09:32:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:40.801 09:32:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:40.801 | .driver_specific 00:17:40.801 | .nvme_error 00:17:40.801 | .status_code 00:17:40.801 | .command_transient_transport_error' 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79751 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79751 ']' 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79751 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79751 00:17:41.061 killing process with pid 79751 00:17:41.061 Received shutdown signal, test time was about 2.000000 seconds 00:17:41.061 00:17:41.061 Latency(us) 00:17:41.061 [2024-10-16T09:32:05.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.061 [2024-10-16T09:32:05.465Z] =================================================================================================================== 00:17:41.061 [2024-10-16T09:32:05.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79751' 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79751 00:17:41.061 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79751 00:17:41.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
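The (( 143 > 0 )) check above is the point of the run: because the controller was attached with --nvme-error-stat, bdev_get_iostat exposes per-status-code NVMe error counters, and the jq pipeline extracts the command_transient_transport_error count (143 here), which only needs to be non-zero for the test to pass. Below is a rough Python equivalent of that extraction; the rpc.py path and socket name are taken from this log, and the JSON layout is the one implied by the jq filter, so treat it as a sketch.

    # Rough Python equivalent of the get_transient_errcount check traced above.
    # Sketch only: rpc.py path and socket come from this log, and the JSON
    # layout is the one implied by the jq filter.
    import json, subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bperf.sock"

    out = subprocess.check_output([RPC, "-s", SOCK, "bdev_get_iostat", "-b", "nvme0n1"])
    stat = json.loads(out)

    # Same path the jq filter walks:
    #   .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    errs = stat["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"]
    count = errs["command_transient_transport_error"]
    assert count > 0, "expected at least one transient transport error"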
00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79804 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79804 /var/tmp/bperf.sock 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79804 ']' 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.320 09:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:41.320 [2024-10-16 09:32:05.522723] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:17:41.320 [2024-10-16 09:32:05.522972] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:17:41.320 Zero copy mechanism will not be used. 
00:17:41.320 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79804 ] 00:17:41.320 [2024-10-16 09:32:05.656590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.320 [2024-10-16 09:32:05.700658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.580 [2024-10-16 09:32:05.754357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.149 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.149 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:42.149 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:42.149 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:42.408 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:42.408 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.408 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:42.408 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.408 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.408 09:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.666 nvme0n1 00:17:42.926 09:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:42.926 09:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.926 09:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:42.926 09:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.926 09:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:42.926 09:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:42.926 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:42.926 Zero copy mechanism will not be used. 00:17:42.926 Running I/O for 2 seconds... 
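With pid 79804 listening, the script repeats the digest-error exercise at a larger I/O size: bdevperf was started with -w randwrite -o 131072 -q 16 against /var/tmp/bperf.sock, the controller is attached with data digest enabled (--ddgst) while CRC32C error injection is disabled, and injection is then re-armed in corrupt mode (-i 32, presumably corrupting every 32nd CRC32C operation) before perform_tests is issued. Each corrupted digest then appears below as a tcp.c data_crc32_calc_done error paired with a TRANSIENT TRANSPORT ERROR (00/22) completion. A condensed sketch of that RPC sequence, copied from the commands traced above (the accel_error_inject_error calls go through rpc_cmd with no -s flag, so they are assumed here to hit the target application's default RPC socket rather than bperf.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable          # attach with injection off (assumed default socket)
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32    # re-arm injection for the I/O phase (assumed default socket)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests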
00:17:42.926 [2024-10-16 09:32:07.187096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.187374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.187401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.191662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.191919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.191946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.196205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.196470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.196498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.200734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.201001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.201027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.205527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.205833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.205860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.210061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.210330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.210356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.214767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.215041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.215067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.219361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.219656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.219703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.223997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.224262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.224289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.228510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.228946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.228968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.233215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.233471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.233498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.237873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.238204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.238264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.243042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.243341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.243400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.248293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.248807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.248833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.253992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.254326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.254356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.259407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.259735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.259768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.264489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.265031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.265054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.269880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.270180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.270205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.274999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.275263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.275290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.279916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.280182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.280210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.284678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.284949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.284975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.289239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.289537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.289571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.293894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.294146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.294171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.298594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.298860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.298884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.926 [2024-10-16 09:32:07.303192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.926 [2024-10-16 09:32:07.303629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.926 [2024-10-16 09:32:07.303651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.927 [2024-10-16 09:32:07.307892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.927 [2024-10-16 09:32:07.308160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.927 [2024-10-16 09:32:07.308186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.927 [2024-10-16 09:32:07.312632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.927 [2024-10-16 09:32:07.312906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.927 [2024-10-16 09:32:07.312932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.927 [2024-10-16 09:32:07.317252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.927 [2024-10-16 09:32:07.317531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.927 [2024-10-16 
09:32:07.317566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.927 [2024-10-16 09:32:07.321822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.927 [2024-10-16 09:32:07.322085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.927 [2024-10-16 09:32:07.322111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.927 [2024-10-16 09:32:07.326588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:42.927 [2024-10-16 09:32:07.326923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.927 [2024-10-16 09:32:07.326950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.331767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.332035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.332077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.336855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.337105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.337131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.341574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.341834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.341860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.346080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.346346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.346372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.350712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.350976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.351002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.355345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.355803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.355826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.359995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.360262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.360288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.364492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.364779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.364805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.369128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.369428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.369455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.373728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.373990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.374015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.378302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.378566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.378603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.382874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.383142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.383168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.387533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.387809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.387834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.392011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.392275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.392302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.396618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.396882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.396908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.401246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.401549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.401588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.405874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.406161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.406187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.410546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.410822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.410847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.415174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.415438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.415463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.187 [2024-10-16 09:32:07.419831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.187 [2024-10-16 09:32:07.420094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.187 [2024-10-16 09:32:07.420119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.424346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.424626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.424651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.428937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.429378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.429401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.433758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.434026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.434052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.438296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.438560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.438609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.442906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.443169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.443195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.447340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.447631] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.447658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.451933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.452198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.452224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.456627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.456891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.456916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.461033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.461324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.461350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.465669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.465934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.465959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.470254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.470519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.470552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.474828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.475093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.475118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.479465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 
00:17:43.188 [2024-10-16 09:32:07.479760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.479786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.484434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.484898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.484973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.489743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.490010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.490052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.494229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.494495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.494521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.498947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.499217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.499243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.503596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.503863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.503888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.508276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.508696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.508718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.512912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.513218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.513244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.517591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.517906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.517937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.522194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.522461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.522486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.526848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.527112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.527138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.531447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.531727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.531752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.535889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.536155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.536181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.540405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.540715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.540745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.544941] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.545230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.545256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.549650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.549900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.188 [2024-10-16 09:32:07.549924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.188 [2024-10-16 09:32:07.554314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.188 [2024-10-16 09:32:07.554596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.554622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.189 [2024-10-16 09:32:07.558861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.189 [2024-10-16 09:32:07.559125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.559150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.189 [2024-10-16 09:32:07.563385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.189 [2024-10-16 09:32:07.563841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.563863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.189 [2024-10-16 09:32:07.568185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.189 [2024-10-16 09:32:07.568450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.568476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.189 [2024-10-16 09:32:07.572649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.189 [2024-10-16 09:32:07.572912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.572937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:43.189 [2024-10-16 09:32:07.577138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.189 [2024-10-16 09:32:07.577446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.577472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.189 [2024-10-16 09:32:07.581860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.189 [2024-10-16 09:32:07.582122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.582148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.189 [2024-10-16 09:32:07.586498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.189 [2024-10-16 09:32:07.586881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.189 [2024-10-16 09:32:07.586930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.449 [2024-10-16 09:32:07.591647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.591955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.591982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.596400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.596747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.596934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.601609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.601888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.601914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.607175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.607721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.607747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.613309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.613626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.613656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.618329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.618656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.618684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.623227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.623661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.623684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.628027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.628302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.628328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.632781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.633058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.633085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.637601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.637896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.637923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.642243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.642512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.642549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.647002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.647274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.647300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.651625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.651887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.651912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.656182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.656457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.656482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.660868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.661142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.661192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.665467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.665846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.665893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.670118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.670386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.670411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.674677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.674941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.674966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.679266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.679704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.679727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.683924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.684189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.684214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.688424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.688738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.688760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.693118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.693518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.693571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.698057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.698323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.698349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.702745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.703016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.703042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.707289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.707728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 
[2024-10-16 09:32:07.707751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.712041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.450 [2024-10-16 09:32:07.712298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.450 [2024-10-16 09:32:07.712324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.450 [2024-10-16 09:32:07.716798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.717055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.717081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.721582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.721915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.721946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.726263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.726514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.726565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.730895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.731158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.731184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.735481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.735797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.735828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.740099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.740397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.740424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.745218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.745511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.745549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.750421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.750723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.750750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.755304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.755626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.755651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.760584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.760925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.760973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.765763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.766088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.766112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.771007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.771281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.771307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.776051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.776322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.776348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.781005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.781443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.781467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.786149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.786420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.786446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.791035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.791309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.791335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.795807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.796079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.796105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.800541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.800823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.800850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.805463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.805823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.810231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.810504] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.810530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.814955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.815228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.815271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.819765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.820125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.820287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.824772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.825039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.825066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.829497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.829828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.829854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.834259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.834706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.834729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.839249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.839505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.839531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.844019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.844285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.844312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.451 [2024-10-16 09:32:07.848686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.451 [2024-10-16 09:32:07.849003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.451 [2024-10-16 09:32:07.849030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.853957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.854252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.854295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.858881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.859228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.859250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.863793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.864058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.864084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.868725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.868992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.869017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.873566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.873858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.873883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.878308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 
[2024-10-16 09:32:07.878583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.878609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.883092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.883363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.883400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.887862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.888149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.888171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.892500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.892792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.892818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.897240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.897507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.897534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.902138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.902410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.902436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.906833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.907102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.907128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.911482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with 
pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.911762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.911788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.916429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.916788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.916821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.921227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.921597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.921671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.926092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.926424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.926457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.931091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.931423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.931455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.935944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.936278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.936310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.940688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.941041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.718 [2024-10-16 09:32:07.941084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.718 [2024-10-16 09:32:07.945384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.718 [2024-10-16 09:32:07.945803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.945841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.950108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.950444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.950477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.954821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.955151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.955189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.959495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.959837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.959879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.964162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.964486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.964529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.968959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.969345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.969381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.973637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.973974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.974006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.978193] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.978526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.978567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.982761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.983103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.983140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.987451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.987789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.987823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.992153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.992475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.992511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:07.996876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:07.997213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:07.997251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.001985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.002329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.002361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.007467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.007796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.007840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:43.719 [2024-10-16 09:32:08.012374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.012723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.012759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.017447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.017758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.017792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.022482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.022853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.022891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.027446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.027807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.027846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.032333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.032687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.032720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.037116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.037466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.037520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.041872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.042194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.042227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.046525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.046872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.046905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.051188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.051512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.051555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.055990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.056313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.056349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.060848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.061199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.061232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.065584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.065944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.065976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.070227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.070556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.070595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.075020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.075343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.075376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.079725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.080070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.080118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.084392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.084711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.084760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.089141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.089512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.089563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.093907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.094238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.094273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.098615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.098942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.098975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.103257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.103581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.103622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.107980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.108316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.108352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.112757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.113089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.113123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.719 [2024-10-16 09:32:08.117627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.719 [2024-10-16 09:32:08.117969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.719 [2024-10-16 09:32:08.118011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.122712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.123047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.123088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.127813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.128162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.128203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.132778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.133146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.133193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.137668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.138016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.138059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.142480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.142833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 
[2024-10-16 09:32:08.142871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.147092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.147428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.147461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.151790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.152131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.152174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.156507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.156848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.156888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.161145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.161482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.161530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.165889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.166257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.170571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.170903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.170936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.175242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.175566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.175607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.179956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.180304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.180337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.983 6485.00 IOPS, 810.62 MiB/s [2024-10-16T09:32:08.387Z] [2024-10-16 09:32:08.185609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.185956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.186001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.190315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.190664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.190692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.195069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.195399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.195431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.199811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.200152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.200184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.204515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.204862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.204900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.209136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.209485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.209523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.213925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.214248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.214282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.218614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.218939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.218971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.223244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.223567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.983 [2024-10-16 09:32:08.223608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.983 [2024-10-16 09:32:08.227894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.983 [2024-10-16 09:32:08.228231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.228269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.232438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.232783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.232815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.237119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.237496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.237534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.241957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 
[2024-10-16 09:32:08.242283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.242318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.246645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.246971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.247003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.251243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.251565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.251606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.255877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.256216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.256254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.261143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.261462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.261496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.266700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.267082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.267115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.272103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.272438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.272475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.277407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) 
with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.277720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.277756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.282752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.283115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.283152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.288007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.288342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.288375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.293095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.293456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.293498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.298295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.298623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.298664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.303390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.303747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.303780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.308453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.308831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.308869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.313536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.313935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.313987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.318753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.319121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.319158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.323596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.323964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.324000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.328262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.328594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.328638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.333048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.333407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.333440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.337820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.338143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.338180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.342350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.342677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.342723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.347041] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.347374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.347409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.351689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.352041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.352073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.356391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.356724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.356770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.361105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.984 [2024-10-16 09:32:08.361458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.984 [2024-10-16 09:32:08.361500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.984 [2024-10-16 09:32:08.365900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.985 [2024-10-16 09:32:08.366224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.985 [2024-10-16 09:32:08.366260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:43.985 [2024-10-16 09:32:08.370581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.985 [2024-10-16 09:32:08.370904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.985 [2024-10-16 09:32:08.370939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.985 [2024-10-16 09:32:08.375242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.985 [2024-10-16 09:32:08.375568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.985 [2024-10-16 09:32:08.375609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:43.985 [2024-10-16 09:32:08.380003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.985 [2024-10-16 09:32:08.380330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.985 [2024-10-16 09:32:08.380365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:43.985 [2024-10-16 09:32:08.385071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:43.985 [2024-10-16 09:32:08.385437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.985 [2024-10-16 09:32:08.385476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.244 [2024-10-16 09:32:08.390148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.244 [2024-10-16 09:32:08.390476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.244 [2024-10-16 09:32:08.390510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.244 [2024-10-16 09:32:08.395205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.244 [2024-10-16 09:32:08.395527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.244 [2024-10-16 09:32:08.395568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.244 [2024-10-16 09:32:08.399873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.244 [2024-10-16 09:32:08.400195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.244 [2024-10-16 09:32:08.400232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.244 [2024-10-16 09:32:08.404424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.244 [2024-10-16 09:32:08.404770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.244 [2024-10-16 09:32:08.404804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.244 [2024-10-16 09:32:08.409066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.244 [2024-10-16 09:32:08.409438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.244 [2024-10-16 09:32:08.409479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.244 [2024-10-16 09:32:08.413824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.414147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.414179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.418517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.418856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.418895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.423229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.423556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.423597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.427990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.428317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.428349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.432645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.432972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.433004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.437298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.437715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.437752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.442095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.442417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.442456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.446820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.447146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.447178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.451472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.451832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.451868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.455960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.456291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.456324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.460821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.461209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.461247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.465736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.466094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.466132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.470481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.470849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.470887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.475134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.475468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.475503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.479779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.480099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.480137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.484223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.484554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.484595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.488907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.489280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.489315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.493707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.494030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.494065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.498332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.498667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.498714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.503112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.503437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.503470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.507817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.508128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 
09:32:08.508165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.512376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.512719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.512758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.517001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.517375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.517411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.522271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.522614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.522656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.527513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.527849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.527883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.532201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.532524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.532569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.536934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.537280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.537327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.541713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.542036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.542071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.245 [2024-10-16 09:32:08.546371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.245 [2024-10-16 09:32:08.546690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.245 [2024-10-16 09:32:08.546742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.551121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.551452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.551486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.555847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.556156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.556193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.560581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.560897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.560931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.565151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.565518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.565562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.569888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.570216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.570248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.574623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.574945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.574979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.579315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.579649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.579696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.584003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.584329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.584364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.588577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.588899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.588934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.593189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.593514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.593567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.597837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.598161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.598194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.602501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.602837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.602879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.607212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.607537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.607581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.611934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.612258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.612293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.616673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.617020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.617064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.621340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.621761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.621797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.626178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.626512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.626569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.630992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.631325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.631357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.635650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.636005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.636054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.640203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.640543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.640590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.246 [2024-10-16 09:32:08.644881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.246 [2024-10-16 09:32:08.645265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.246 [2024-10-16 09:32:08.645304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.506 [2024-10-16 09:32:08.650104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.650452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.650487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.655154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.655509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.655550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.659766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.660109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.660145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.664472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.664834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.664871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.669077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.669437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.669481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.673707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 
[2024-10-16 09:32:08.674043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.674074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.678290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.678621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.678679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.682901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.683232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.683267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.687637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.687951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.687985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.692315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.692638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.692675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.696990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.697347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.697380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.701682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.702010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.702041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.706370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) 
with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.706713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.706747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.711121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.711448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.711483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.715796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.716123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.716160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.720541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.720879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.720921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.725146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.725498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.725531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.729899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.730224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.730257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.734599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.734923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.734956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.739242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.739566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.739609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.743876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.744211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.744249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.748394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.748723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.748773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.753023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.753369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.753402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.757783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.758110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.758142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.762486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.762832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.762870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.767122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.767449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.767482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.771804] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.772158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.772196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.776607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.776997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.507 [2024-10-16 09:32:08.777034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.507 [2024-10-16 09:32:08.781853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.507 [2024-10-16 09:32:08.782198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.782230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.786947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.787282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.787314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.791677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.792026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.792067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.796336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.796677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.796724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.801061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.801397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.801430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:44.508 [2024-10-16 09:32:08.805807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.806141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.806173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.810527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.810869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.810906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.815349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.815688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.815720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.820079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.820414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.820460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.824744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.825075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.825107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.829340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.829740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.829777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.834120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.834444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.834477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.838922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.839252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.839285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.843639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.843966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.844001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.848282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.848606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.848650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.852998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.853344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.853380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.857710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.858033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.858065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.862399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.862738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.862772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.867201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.867527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.867572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.871932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.872256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.872288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.876629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.876945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.876980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.881310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.881701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.881738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.886063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.886387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.886422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.890785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.891113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.891146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.895490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.895837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.895877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.900160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.900485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.900517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.904842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.905197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.905235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.508 [2024-10-16 09:32:08.910045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.508 [2024-10-16 09:32:08.910387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.508 [2024-10-16 09:32:08.910444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.915096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.915423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.915472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.920085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.920413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.920448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.924815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.925181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.925214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.929494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.929884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.929920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.934701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.935052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 
09:32:08.935086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.939715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.940066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.940110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.945043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.945403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.945443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.950198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.950521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.950581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.955342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.955679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.955713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.960411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.960772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.960819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.965741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.966112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.966151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.970723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.971069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.971113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.975789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.976140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.976173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.980753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.981099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.981136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.985651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.985982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.986014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.990507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.990851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.990886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:08.995249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:08.995571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:08.995614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:09.000039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:09.000371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:09.000418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:09.004865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:09.005237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.769 [2024-10-16 09:32:09.005275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.769 [2024-10-16 09:32:09.009790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.769 [2024-10-16 09:32:09.010123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.010169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.014603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.014938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.014983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.019362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.019705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.019739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.024247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.024577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.024620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.028980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.029340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.029384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.033710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.034070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.034109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.039041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.039402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.039441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.044203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.044548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.044590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.049028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.049371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.049418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.054060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.054399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.054434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.058805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.059146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.059179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.063608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.063940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.063973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.068607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.068949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.068994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.073319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.073731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.073769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.078102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.078443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.078480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.082952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.083291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.083325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.087739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.088069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.088115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.092690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.093034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.093071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.097326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.097748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.097786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.102137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.102469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.102505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.107060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 
09:32:09.107392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.107425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.112019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.112344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.112376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.116834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.117187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.117221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.121431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.121771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.121808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.126100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.126424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.126459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.130767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.131091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.131124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.135526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.135862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.135893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.140172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with 
pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.140500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.140535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.144975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.770 [2024-10-16 09:32:09.145347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.770 [2024-10-16 09:32:09.145387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.770 [2024-10-16 09:32:09.149715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.771 [2024-10-16 09:32:09.150068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-10-16 09:32:09.150103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:44.771 [2024-10-16 09:32:09.154321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.771 [2024-10-16 09:32:09.154666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-10-16 09:32:09.154701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.771 [2024-10-16 09:32:09.159039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.771 [2024-10-16 09:32:09.159375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-10-16 09:32:09.159408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:44.771 [2024-10-16 09:32:09.163815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.771 [2024-10-16 09:32:09.164133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-10-16 09:32:09.164166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:44.771 [2024-10-16 09:32:09.168560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:44.771 [2024-10-16 09:32:09.168934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.771 [2024-10-16 09:32:09.168974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.030 [2024-10-16 09:32:09.173783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:45.030 [2024-10-16 09:32:09.174205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.030 [2024-10-16 09:32:09.174243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.030 [2024-10-16 09:32:09.178781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:45.030 [2024-10-16 09:32:09.179137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.030 [2024-10-16 09:32:09.179175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.030 6471.00 IOPS, 808.88 MiB/s [2024-10-16T09:32:09.434Z] [2024-10-16 09:32:09.184724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdf13f0) with pdu=0x2000166fef90 00:17:45.030 [2024-10-16 09:32:09.184815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.030 [2024-10-16 09:32:09.184836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.030 00:17:45.030 Latency(us) 00:17:45.030 [2024-10-16T09:32:09.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.030 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:45.030 nvme0n1 : 2.00 6467.23 808.40 0.00 0.00 2468.26 1474.56 6315.29 00:17:45.030 [2024-10-16T09:32:09.434Z] =================================================================================================================== 00:17:45.030 [2024-10-16T09:32:09.434Z] Total : 6467.23 808.40 0.00 0.00 2468.26 1474.56 6315.29 00:17:45.030 { 00:17:45.030 "results": [ 00:17:45.030 { 00:17:45.030 "job": "nvme0n1", 00:17:45.030 "core_mask": "0x2", 00:17:45.030 "workload": "randwrite", 00:17:45.030 "status": "finished", 00:17:45.030 "queue_depth": 16, 00:17:45.030 "io_size": 131072, 00:17:45.030 "runtime": 2.003796, 00:17:45.030 "iops": 6467.225206557953, 00:17:45.030 "mibps": 808.4031508197442, 00:17:45.030 "io_failed": 0, 00:17:45.030 "io_timeout": 0, 00:17:45.030 "avg_latency_us": 2468.256858203144, 00:17:45.030 "min_latency_us": 1474.56, 00:17:45.030 "max_latency_us": 6315.2872727272725 00:17:45.030 } 00:17:45.030 ], 00:17:45.030 "core_count": 1 00:17:45.030 } 00:17:45.030 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:45.030 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:45.030 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:45.030 | .driver_specific 00:17:45.030 | .nvme_error 00:17:45.030 | .status_code 00:17:45.030 | .command_transient_transport_error' 00:17:45.030 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 418 > 0 )) 00:17:45.290 09:32:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79804 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79804 ']' 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79804 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79804 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:45.290 killing process with pid 79804 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79804' 00:17:45.290 Received shutdown signal, test time was about 2.000000 seconds 00:17:45.290 00:17:45.290 Latency(us) 00:17:45.290 [2024-10-16T09:32:09.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.290 [2024-10-16T09:32:09.694Z] =================================================================================================================== 00:17:45.290 [2024-10-16T09:32:09.694Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79804 00:17:45.290 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79804 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79628 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79628 ']' 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79628 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79628 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:45.549 killing process with pid 79628 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79628' 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79628 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79628 00:17:45.549 00:17:45.549 real 0m15.838s 00:17:45.549 user 0m30.805s 00:17:45.549 sys 0m4.682s 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:17:45.549 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.549 ************************************ 00:17:45.549 END TEST nvmf_digest_error 00:17:45.549 ************************************ 00:17:45.808 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:45.808 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:45.808 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:45.808 09:32:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:45.808 rmmod nvme_tcp 00:17:45.808 rmmod nvme_fabrics 00:17:45.808 rmmod nvme_keyring 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 79628 ']' 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 79628 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79628 ']' 00:17:45.808 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79628 00:17:45.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79628) - No such process 00:17:45.808 Process with pid 79628 is not found 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 79628 is not found' 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # 
ip link set nvmf_init_br down 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:45.809 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:17:46.068 00:17:46.068 real 0m32.036s 00:17:46.068 user 1m0.258s 00:17:46.068 sys 0m9.658s 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:46.068 ************************************ 00:17:46.068 END TEST nvmf_digest 00:17:46.068 ************************************ 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.068 ************************************ 00:17:46.068 START TEST nvmf_host_multipath 00:17:46.068 ************************************ 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:46.068 * Looking for test storage... 
00:17:46.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:17:46.068 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.328 --rc genhtml_branch_coverage=1 00:17:46.328 --rc genhtml_function_coverage=1 00:17:46.328 --rc genhtml_legend=1 00:17:46.328 --rc geninfo_all_blocks=1 00:17:46.328 --rc geninfo_unexecuted_blocks=1 00:17:46.328 00:17:46.328 ' 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.328 --rc genhtml_branch_coverage=1 00:17:46.328 --rc genhtml_function_coverage=1 00:17:46.328 --rc genhtml_legend=1 00:17:46.328 --rc geninfo_all_blocks=1 00:17:46.328 --rc geninfo_unexecuted_blocks=1 00:17:46.328 00:17:46.328 ' 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.328 --rc genhtml_branch_coverage=1 00:17:46.328 --rc genhtml_function_coverage=1 00:17:46.328 --rc genhtml_legend=1 00:17:46.328 --rc geninfo_all_blocks=1 00:17:46.328 --rc geninfo_unexecuted_blocks=1 00:17:46.328 00:17:46.328 ' 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:46.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.328 --rc genhtml_branch_coverage=1 00:17:46.328 --rc genhtml_function_coverage=1 00:17:46.328 --rc genhtml_legend=1 00:17:46.328 --rc geninfo_all_blocks=1 00:17:46.328 --rc geninfo_unexecuted_blocks=1 00:17:46.328 00:17:46.328 ' 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:17:46.328 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.329 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:46.329 Cannot find device "nvmf_init_br" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:46.329 Cannot find device "nvmf_init_br2" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:46.329 Cannot find device "nvmf_tgt_br" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.329 Cannot find device "nvmf_tgt_br2" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:46.329 Cannot find device "nvmf_init_br" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:46.329 Cannot find device "nvmf_init_br2" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:46.329 Cannot find device "nvmf_tgt_br" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:46.329 Cannot find device "nvmf_tgt_br2" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:46.329 Cannot find device "nvmf_br" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:46.329 Cannot find device "nvmf_init_if" 00:17:46.329 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:17:46.330 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:46.330 Cannot find device "nvmf_init_if2" 00:17:46.330 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:17:46.330 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:17:46.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:46.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:46.589 00:17:46.589 --- 10.0.0.3 ping statistics --- 00:17:46.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.589 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:46.589 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:46.589 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:17:46.589 00:17:46.589 --- 10.0.0.4 ping statistics --- 00:17:46.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.589 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:46.589 00:17:46.589 --- 10.0.0.1 ping statistics --- 00:17:46.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.589 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:46.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:46.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:17:46.589 00:17:46.589 --- 10.0.0.2 ping statistics --- 00:17:46.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.589 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # return 0 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:46.589 09:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # nvmfpid=80120 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # waitforlisten 80120 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80120 ']' 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.848 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:46.848 [2024-10-16 09:32:11.070316] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:17:46.848 [2024-10-16 09:32:11.070399] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.848 [2024-10-16 09:32:11.209075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:47.107 [2024-10-16 09:32:11.260939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.107 [2024-10-16 09:32:11.261237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.107 [2024-10-16 09:32:11.261379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.107 [2024-10-16 09:32:11.261396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.107 [2024-10-16 09:32:11.261405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.107 [2024-10-16 09:32:11.264575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.107 [2024-10-16 09:32:11.264613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.107 [2024-10-16 09:32:11.321284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80120 00:17:47.107 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:47.366 [2024-10-16 09:32:11.723478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.366 09:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:47.625 Malloc0 00:17:47.625 09:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:48.192 09:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.192 09:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:48.452 [2024-10-16 09:32:12.772929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.452 09:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:48.711 [2024-10-16 09:32:12.993006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80168 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80168 /var/tmp/bdevperf.sock 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80168 ']' 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.711 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:48.969 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.969 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:17:48.969 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:49.536 09:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:49.795 Nvme0n1 00:17:49.795 09:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:50.054 Nvme0n1 00:17:50.054 09:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:50.054 09:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:50.990 09:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:50.990 09:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:51.248 09:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:51.507 09:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:51.508 09:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80120 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:51.508 09:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80206 00:17:51.508 09:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:58.097 09:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:58.097 09:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:58.097 Attaching 4 probes... 00:17:58.097 @path[10.0.0.3, 4421]: 14902 00:17:58.097 @path[10.0.0.3, 4421]: 15586 00:17:58.097 @path[10.0.0.3, 4421]: 15416 00:17:58.097 @path[10.0.0.3, 4421]: 15404 00:17:58.097 @path[10.0.0.3, 4421]: 15344 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80206 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:58.097 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:58.356 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:58.356 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80325 00:17:58.356 09:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80120 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:58.356 09:32:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:04.924 Attaching 4 probes... 00:18:04.924 @path[10.0.0.3, 4420]: 19718 00:18:04.924 @path[10.0.0.3, 4420]: 20342 00:18:04.924 @path[10.0.0.3, 4420]: 20329 00:18:04.924 @path[10.0.0.3, 4420]: 20377 00:18:04.924 @path[10.0.0.3, 4420]: 20351 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80325 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:04.924 09:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:04.924 09:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:05.183 09:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:05.183 09:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80120 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:05.183 09:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80437 00:18:05.183 09:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:11.747 Attaching 4 probes... 00:18:11.747 @path[10.0.0.3, 4421]: 14964 00:18:11.747 @path[10.0.0.3, 4421]: 19848 00:18:11.747 @path[10.0.0.3, 4421]: 19881 00:18:11.747 @path[10.0.0.3, 4421]: 19913 00:18:11.747 @path[10.0.0.3, 4421]: 20059 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80437 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:11.747 09:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:11.748 09:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:12.006 09:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:12.006 09:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80554 00:18:12.006 09:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80120 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:12.006 09:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:18.571 Attaching 4 probes... 
00:18:18.571 00:18:18.571 00:18:18.571 00:18:18.571 00:18:18.571 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80554 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:18.571 09:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:18.830 09:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:18.830 09:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80668 00:18:18.830 09:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80120 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:18.830 09:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.396 Attaching 4 probes... 
00:18:25.396 @path[10.0.0.3, 4421]: 19287 00:18:25.396 @path[10.0.0.3, 4421]: 19872 00:18:25.396 @path[10.0.0.3, 4421]: 19528 00:18:25.396 @path[10.0.0.3, 4421]: 19616 00:18:25.396 @path[10.0.0.3, 4421]: 19622 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80668 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:25.396 09:32:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:26.332 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:26.332 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80791 00:18:26.332 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80120 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:26.332 09:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.910 Attaching 4 probes... 
00:18:32.910 @path[10.0.0.3, 4420]: 19335 00:18:32.910 @path[10.0.0.3, 4420]: 19562 00:18:32.910 @path[10.0.0.3, 4420]: 19637 00:18:32.910 @path[10.0.0.3, 4420]: 19762 00:18:32.910 @path[10.0.0.3, 4420]: 19856 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80791 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:32.910 09:32:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:32.910 [2024-10-16 09:32:57.213341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:32.910 09:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:33.169 09:32:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:39.833 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:39.833 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80970 00:18:39.833 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80120 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:39.833 09:33:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:46.431 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:46.431 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:46.431 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:46.431 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.431 Attaching 4 probes... 
00:18:46.431 @path[10.0.0.3, 4421]: 19194 00:18:46.431 @path[10.0.0.3, 4421]: 19660 00:18:46.431 @path[10.0.0.3, 4421]: 19750 00:18:46.431 @path[10.0.0.3, 4421]: 19823 00:18:46.431 @path[10.0.0.3, 4421]: 19708 00:18:46.431 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:46.431 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:46.431 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80970 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80168 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80168 ']' 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80168 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80168 00:18:46.432 killing process with pid 80168 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80168' 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80168 00:18:46.432 09:33:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80168 00:18:46.432 { 00:18:46.432 "results": [ 00:18:46.432 { 00:18:46.432 "job": "Nvme0n1", 00:18:46.432 "core_mask": "0x4", 00:18:46.432 "workload": "verify", 00:18:46.432 "status": "terminated", 00:18:46.432 "verify_range": { 00:18:46.432 "start": 0, 00:18:46.432 "length": 16384 00:18:46.432 }, 00:18:46.432 "queue_depth": 128, 00:18:46.432 "io_size": 4096, 00:18:46.432 "runtime": 55.460342, 00:18:46.432 "iops": 8109.993263294337, 00:18:46.432 "mibps": 31.679661184743505, 00:18:46.432 "io_failed": 0, 00:18:46.432 "io_timeout": 0, 00:18:46.432 "avg_latency_us": 15753.403604105657, 00:18:46.432 "min_latency_us": 363.05454545454546, 00:18:46.432 "max_latency_us": 7046430.72 00:18:46.432 } 00:18:46.432 ], 00:18:46.432 "core_count": 1 00:18:46.432 } 00:18:46.432 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80168 00:18:46.432 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:46.432 [2024-10-16 09:32:13.058185] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 
24.03.0 initialization... 00:18:46.432 [2024-10-16 09:32:13.058257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80168 ] 00:18:46.432 [2024-10-16 09:32:13.190598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.432 [2024-10-16 09:32:13.244378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.432 [2024-10-16 09:32:13.300683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.432 Running I/O for 90 seconds... 00:18:46.432 7956.00 IOPS, 31.08 MiB/s [2024-10-16T09:33:10.836Z] 7719.00 IOPS, 30.15 MiB/s [2024-10-16T09:33:10.836Z] 7701.00 IOPS, 30.08 MiB/s [2024-10-16T09:33:10.836Z] 7727.75 IOPS, 30.19 MiB/s [2024-10-16T09:33:10.836Z] 7718.20 IOPS, 30.15 MiB/s [2024-10-16T09:33:10.836Z] 7733.00 IOPS, 30.21 MiB/s [2024-10-16T09:33:10.836Z] 7725.43 IOPS, 30.18 MiB/s [2024-10-16T09:33:10.836Z] 7703.75 IOPS, 30.09 MiB/s [2024-10-16T09:33:10.836Z] [2024-10-16 09:32:22.671803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.671859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.671921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.671940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.671960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.671974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.671993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.672007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.672039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.672071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.672103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.432 [2024-10-16 09:32:22.672135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.432 [2024-10-16 09:32:22.672595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.432 [2024-10-16 09:32:22.672613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.672634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.672668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.672701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672790] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.672961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.672979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.672992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 
09:32:22.673118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.433 [2024-10-16 09:32:22.673905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.673945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.673979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.673993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.674020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.433 [2024-10-16 09:32:22.674035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.433 [2024-10-16 09:32:22.674054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.434 [2024-10-16 09:32:22.674264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.674466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 
nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674934] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.674980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.674998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.675012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.434 [2024-10-16 09:32:22.675277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.434 [2024-10-16 09:32:22.675296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.434 [2024-10-16 09:32:22.675310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.675339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.435 [2024-10-16 09:32:22.675353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.675372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.435 [2024-10-16 09:32:22.675386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.675405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.435 [2024-10-16 09:32:22.675419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.675438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.435 [2024-10-16 09:32:22.675451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.675470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.435 [2024-10-16 09:32:22.675484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.675503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.435 [2024-10-16 09:32:22.675517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.676870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.435 [2024-10-16 09:32:22.676900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.676927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.676944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.676964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.676978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.676997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.677962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:22.677994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:22.678012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.435 7871.44 IOPS, 30.75 MiB/s [2024-10-16T09:33:10.839Z] 8106.70 IOPS, 31.67 MiB/s [2024-10-16T09:33:10.839Z] 8300.64 IOPS, 32.42 MiB/s [2024-10-16T09:33:10.839Z] 8457.58 IOPS, 33.04 MiB/s [2024-10-16T09:33:10.839Z] 8590.38 IOPS, 33.56 MiB/s [2024-10-16T09:33:10.839Z] 8703.64 IOPS, 34.00 MiB/s [2024-10-16T09:33:10.839Z] [2024-10-16 09:32:29.215873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:29.215926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:29.215988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:29.216007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:29.216028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:29.216043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:29.216084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:29.216099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:29.216118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.435 [2024-10-16 09:32:29.216131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.435 [2024-10-16 09:32:29.216149] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 
09:32:29.216507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.436 [2024-10-16 09:32:29.216521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.216975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.216995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217487] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.436 [2024-10-16 09:32:29.217520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.436 [2024-10-16 09:32:29.217540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.217569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.217613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.217648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.217692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.217726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.217759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.217792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.217825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 
09:32:29.217858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.217890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.217922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.217955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.217974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.217987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120200 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.437 [2024-10-16 09:32:29.218790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d 
p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.437 [2024-10-16 09:32:29.218942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.437 [2024-10-16 09:32:29.218978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.218992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.219548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 
09:32:29.219632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119856 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.219975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.219995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.220009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.220029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.220044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.220064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.220078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.220774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.438 [2024-10-16 09:32:29.220802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.220835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.220851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.220878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.220894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.220921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.220936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.220962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.221009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.221038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.221064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.438 [2024-10-16 09:32:29.221095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.438 [2024-10-16 09:32:29.221122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:29.221495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:29.221510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.439 8642.33 IOPS, 33.76 MiB/s [2024-10-16T09:33:10.843Z] 8227.12 IOPS, 32.14 MiB/s [2024-10-16T09:33:10.843Z] 8329.53 IOPS, 32.54 MiB/s [2024-10-16T09:33:10.843Z] 8418.78 IOPS, 32.89 MiB/s [2024-10-16T09:33:10.843Z] 8498.21 IOPS, 33.20 MiB/s [2024-10-16T09:33:10.843Z] 8572.10 IOPS, 33.48 MiB/s [2024-10-16T09:33:10.843Z] 8637.05 IOPS, 33.74 MiB/s 
[2024-10-16T09:33:10.843Z] [2024-10-16 09:32:36.299619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.299969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.299992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.439 [2024-10-16 09:32:36.300507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.439 [2024-10-16 09:32:36.300668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.439 [2024-10-16 09:32:36.300690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:46.440 [2024-10-16 09:32:36.300771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.300973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.300987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.440 [2024-10-16 09:32:36.301481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:18:46.440 [2024-10-16 09:32:36.301941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.301976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.301990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.302009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.302023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.302042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.302056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:46.440 [2024-10-16 09:32:36.302075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.440 [2024-10-16 09:32:36.302095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.441 [2024-10-16 09:32:36.302674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.302945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.302973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.303000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.303015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.303034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.303048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.303067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.303081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.303100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.303115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.303134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.303147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.303167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.441 [2024-10-16 09:32:36.303181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:46.441 [2024-10-16 09:32:36.303200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.303213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.303247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.303834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.303874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.303910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.303943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.303976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.303996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.304009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.304028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.304042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:18:46.442 [2024-10-16 09:32:36.304062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.304076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.304826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.442 [2024-10-16 09:32:36.304870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.304905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.304922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.304952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.304967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.304996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.305011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.305040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.305059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.305087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.305130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.305162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.305178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.305207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.305222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:36.305266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:36.305286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:46.442 8623.00 IOPS, 33.68 MiB/s [2024-10-16T09:33:10.846Z] 8248.09 IOPS, 32.22 MiB/s [2024-10-16T09:33:10.846Z] 7904.42 IOPS, 30.88 MiB/s [2024-10-16T09:33:10.846Z] 7588.24 IOPS, 29.64 MiB/s [2024-10-16T09:33:10.846Z] 7296.38 IOPS, 28.50 MiB/s [2024-10-16T09:33:10.846Z] 7026.15 IOPS, 27.45 MiB/s [2024-10-16T09:33:10.846Z] 6775.21 IOPS, 26.47 MiB/s [2024-10-16T09:33:10.846Z] 6579.07 IOPS, 25.70 MiB/s [2024-10-16T09:33:10.846Z] 6684.03 IOPS, 26.11 MiB/s [2024-10-16T09:33:10.846Z] 6788.42 IOPS, 26.52 MiB/s [2024-10-16T09:33:10.846Z] 6881.03 IOPS, 26.88 MiB/s [2024-10-16T09:33:10.846Z] 6968.52 IOPS, 27.22 MiB/s [2024-10-16T09:33:10.846Z] 7052.26 IOPS, 27.55 MiB/s [2024-10-16T09:33:10.846Z] 7128.26 IOPS, 27.84 MiB/s [2024-10-16T09:33:10.846Z] [2024-10-16 09:32:49.622407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:49.622457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:46.442 [2024-10-16 09:32:49.622519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.442 [2024-10-16 09:32:49.622538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.622587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.622619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.622650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.622681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.622712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.622765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.622799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.622830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.622861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.622892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.622922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.622952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.622970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.622983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.623077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 
[2024-10-16 09:32:49.623106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.623131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.623166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.623193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.623218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.623244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.443 [2024-10-16 09:32:49.623269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.443 [2024-10-16 09:32:49.623642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.443 [2024-10-16 09:32:49.623655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.623668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.623693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.623719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.623931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.623957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.623985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.623998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 
[2024-10-16 09:32:49.624209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.444 [2024-10-16 09:32:49.624597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.444 [2024-10-16 09:32:49.624665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.444 [2024-10-16 09:32:49.624677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.624705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.624732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.624759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.624785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.624812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.624838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.624864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.624891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.624925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.624962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.624976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.624989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45448 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.445 [2024-10-16 09:32:49.625305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 
[2024-10-16 09:32:49.625403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.445 [2024-10-16 09:32:49.625704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.445 [2024-10-16 09:32:49.625717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.625731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.446 [2024-10-16 09:32:49.625744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.625758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.446 [2024-10-16 09:32:49.625771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.625784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5320 is same with the state(6) to be set 00:18:46.446 [2024-10-16 09:32:49.625799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.625810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.625820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45120 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.625833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.625846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.625855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.625865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45512 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.625877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.625890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.625899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.625908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45520 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.625921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.625933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.625942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.625952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45528 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.625964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.625976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.625985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:18:46.446 [2024-10-16 09:32:49.625995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45536 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45544 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45552 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45560 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45568 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45576 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 
09:32:49.626277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45584 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45592 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45600 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45608 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45616 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45624 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.626520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.446 [2024-10-16 09:32:49.626529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.446 [2024-10-16 09:32:49.626538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45632 len:8 PRP1 0x0 PRP2 0x0 00:18:46.446 [2024-10-16 09:32:49.626555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.627487] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19a5320 was disconnected and freed. reset controller. 00:18:46.446 [2024-10-16 09:32:49.628558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.446 [2024-10-16 09:32:49.628644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.446 [2024-10-16 09:32:49.628667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.446 [2024-10-16 09:32:49.628695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1998d20 (9): Bad file descriptor 00:18:46.446 [2024-10-16 09:32:49.629110] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.446 [2024-10-16 09:32:49.629142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1998d20 with addr=10.0.0.3, port=4421 00:18:46.446 [2024-10-16 09:32:49.629160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1998d20 is same with the state(6) to be set 00:18:46.446 [2024-10-16 09:32:49.629194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1998d20 (9): Bad file descriptor 00:18:46.446 [2024-10-16 09:32:49.629225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.446 [2024-10-16 09:32:49.629242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:46.446 [2024-10-16 09:32:49.629255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.446 [2024-10-16 09:32:49.629286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.446 [2024-10-16 09:32:49.629304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.446 7200.72 IOPS, 28.13 MiB/s [2024-10-16T09:33:10.850Z] 7264.05 IOPS, 28.38 MiB/s [2024-10-16T09:33:10.850Z] 7329.95 IOPS, 28.63 MiB/s [2024-10-16T09:33:10.850Z] 7394.10 IOPS, 28.88 MiB/s [2024-10-16T09:33:10.850Z] 7455.85 IOPS, 29.12 MiB/s [2024-10-16T09:33:10.850Z] 7512.83 IOPS, 29.35 MiB/s [2024-10-16T09:33:10.850Z] 7569.57 IOPS, 29.57 MiB/s [2024-10-16T09:33:10.850Z] 7617.35 IOPS, 29.76 MiB/s [2024-10-16T09:33:10.850Z] 7666.77 IOPS, 29.95 MiB/s [2024-10-16T09:33:10.850Z] 7714.89 IOPS, 30.14 MiB/s [2024-10-16T09:33:10.850Z] [2024-10-16 09:32:59.682564] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
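Editor's note: the long run of "ABORTED - SQ DELETION" completions above, the connect() failure with errno 111 against 10.0.0.3 port 4421, and the closing "Resetting controller successful" notice are the bdev_nvme reconnect path at work: once the active TCP listener goes away, queued I/O on that qpair is completed as aborted, the controller is disconnected, and reconnects are retried on the configured delay until a listener answers again. A minimal sketch of reproducing that outage window by hand against the target shown in this log (the NQN, address and port are copied from the output above, the sleep length is an arbitrary choice, and this is not the multipath.sh test script itself):

  # Drop the listener the host is currently connected to; in-flight and queued
  # I/O on that path complete as "ABORTED - SQ DELETION".
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421

  # While no listener exists, each reconnect attempt fails with errno 111
  # (ECONNREFUSED) and bdev_nvme keeps retrying on its reconnect delay.
  sleep 10   # arbitrary outage window

  # Restore the listener; the next retry succeeds and the log reports
  # "Resetting controller successful".
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421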
00:18:46.446 7762.93 IOPS, 30.32 MiB/s [2024-10-16T09:33:10.850Z] 7808.15 IOPS, 30.50 MiB/s [2024-10-16T09:33:10.851Z] 7852.48 IOPS, 30.67 MiB/s [2024-10-16T09:33:10.851Z] 7895.49 IOPS, 30.84 MiB/s [2024-10-16T09:33:10.851Z] 7928.46 IOPS, 30.97 MiB/s [2024-10-16T09:33:10.851Z] 7964.84 IOPS, 31.11 MiB/s [2024-10-16T09:33:10.851Z] 8000.37 IOPS, 31.25 MiB/s [2024-10-16T09:33:10.851Z] 8035.15 IOPS, 31.39 MiB/s [2024-10-16T09:33:10.851Z] 8068.87 IOPS, 31.52 MiB/s [2024-10-16T09:33:10.851Z] 8101.95 IOPS, 31.65 MiB/s [2024-10-16T09:33:10.851Z] Received shutdown signal, test time was about 55.461082 seconds 00:18:46.447 00:18:46.447 Latency(us) 00:18:46.447 [2024-10-16T09:33:10.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.447 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:46.447 Verification LBA range: start 0x0 length 0x4000 00:18:46.447 Nvme0n1 : 55.46 8109.99 31.68 0.00 0.00 15753.40 363.05 7046430.72 00:18:46.447 [2024-10-16T09:33:10.851Z] =================================================================================================================== 00:18:46.447 [2024-10-16T09:33:10.851Z] Total : 8109.99 31.68 0.00 0.00 15753.40 363.05 7046430.72 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.447 rmmod nvme_tcp 00:18:46.447 rmmod nvme_fabrics 00:18:46.447 rmmod nvme_keyring 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@515 -- # '[' -n 80120 ']' 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # killprocess 80120 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80120 ']' 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80120 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80120 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:46.447 killing process with pid 80120 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80120' 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80120 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80120 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-save 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:46.447 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.706 09:33:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:18:46.706 00:18:46.706 real 1m0.589s 00:18:46.706 user 2m46.587s 00:18:46.706 sys 0m19.337s 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.706 ************************************ 00:18:46.706 END TEST nvmf_host_multipath 00:18:46.706 ************************************ 00:18:46.706 09:33:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.706 ************************************ 00:18:46.706 START TEST nvmf_timeout 00:18:46.706 ************************************ 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:46.706 * Looking for test storage... 00:18:46.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:18:46.706 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.966 --rc genhtml_branch_coverage=1 00:18:46.966 --rc genhtml_function_coverage=1 00:18:46.966 --rc genhtml_legend=1 00:18:46.966 --rc geninfo_all_blocks=1 00:18:46.966 --rc geninfo_unexecuted_blocks=1 00:18:46.966 00:18:46.966 ' 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.966 --rc genhtml_branch_coverage=1 00:18:46.966 --rc genhtml_function_coverage=1 00:18:46.966 --rc genhtml_legend=1 00:18:46.966 --rc geninfo_all_blocks=1 00:18:46.966 --rc geninfo_unexecuted_blocks=1 00:18:46.966 00:18:46.966 ' 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.966 --rc genhtml_branch_coverage=1 00:18:46.966 --rc genhtml_function_coverage=1 00:18:46.966 --rc genhtml_legend=1 00:18:46.966 --rc geninfo_all_blocks=1 00:18:46.966 --rc geninfo_unexecuted_blocks=1 00:18:46.966 00:18:46.966 ' 00:18:46.966 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:46.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.967 --rc genhtml_branch_coverage=1 00:18:46.967 --rc genhtml_function_coverage=1 00:18:46.967 --rc genhtml_legend=1 00:18:46.967 --rc geninfo_all_blocks=1 00:18:46.967 --rc geninfo_unexecuted_blocks=1 00:18:46.967 00:18:46.967 ' 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.967 
09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.967 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:46.967 09:33:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:46.967 Cannot find device "nvmf_init_br" 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:46.967 Cannot find device "nvmf_init_br2" 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:18:46.967 Cannot find device "nvmf_tgt_br" 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.967 Cannot find device "nvmf_tgt_br2" 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:46.967 Cannot find device "nvmf_init_br" 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:46.967 Cannot find device "nvmf_init_br2" 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:46.967 Cannot find device "nvmf_tgt_br" 00:18:46.967 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:46.968 Cannot find device "nvmf_tgt_br2" 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:46.968 Cannot find device "nvmf_br" 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:46.968 Cannot find device "nvmf_init_if" 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:46.968 Cannot find device "nvmf_init_if2" 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:18:46.968 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
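Editor's note: the nvmf_veth_init sequence above builds the entire test network in software: a target namespace (nvmf_tgt_ns_spdk), initiator- and target-side veth pairs, addresses 10.0.0.1-10.0.0.4/24, a bridge (nvmf_br) that ties the host-side peers together, and iptables ACCEPT rules for port 4420 and for forwarding across the bridge. A condensed sketch of the same topology for a single initiator/target pair (names and addresses follow the log; this is a simplification of common.sh, not the function itself):

  # One initiator/target veth pair, target end inside its own namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addresses: initiator 10.0.0.1, target 10.0.0.3 (as verified by the pings below).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bring everything up and bridge the host-side peers together.
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # Let NVMe/TCP traffic and bridged forwarding through the firewall.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT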
00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:47.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:47.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:18:47.227 00:18:47.227 --- 10.0.0.3 ping statistics --- 00:18:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.227 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:47.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:47.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:18:47.227 00:18:47.227 --- 10.0.0.4 ping statistics --- 00:18:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.227 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:47.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:47.227 00:18:47.227 --- 10.0.0.1 ping statistics --- 00:18:47.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.227 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:47.227 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:47.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:18:47.228 00:18:47.228 --- 10.0.0.2 ping statistics --- 00:18:47.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.228 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # return 0 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # nvmfpid=81333 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # waitforlisten 81333 00:18:47.228 09:33:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81333 ']' 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.228 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:47.488 [2024-10-16 09:33:11.653238] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:18:47.488 [2024-10-16 09:33:11.653323] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.488 [2024-10-16 09:33:11.786994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:47.488 [2024-10-16 09:33:11.837115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.488 [2024-10-16 09:33:11.837344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.488 [2024-10-16 09:33:11.837413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.488 [2024-10-16 09:33:11.837511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.488 [2024-10-16 09:33:11.837572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
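Editor's note: nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and then blocks in waitforlisten until pid 81333 is serving RPCs. A rough equivalent of that wait (the real helper in autotest_common.sh does more bookkeeping; this only shows the polling idea, using rpc.py's timeout flag against the default /var/tmp/spdk.sock socket):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  # Poll the RPC socket until the app answers (or bail out if it died).
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done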
00:18:47.488 [2024-10-16 09:33:11.838958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.488 [2024-10-16 09:33:11.838971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.488 [2024-10-16 09:33:11.891738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:47.747 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.747 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:47.747 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:47.747 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.747 09:33:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:47.747 09:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.747 09:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:47.747 09:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:48.006 [2024-10-16 09:33:12.290462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.006 09:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:48.265 Malloc0 00:18:48.265 09:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.524 09:33:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:49.092 [2024-10-16 09:33:13.411588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81380 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81380 /var/tmp/bdevperf.sock 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81380 ']' 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
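Editor's note: between nvmfappstart and the bdevperf launch, timeout.sh builds the target side entirely over RPC: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.3:4420. The same sequence collected in one place (taken from the traced commands above, with the repo path shortened to be relative):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420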
00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.092 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:49.092 [2024-10-16 09:33:13.469290] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:18:49.092 [2024-10-16 09:33:13.469416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81380 ] 00:18:49.356 [2024-10-16 09:33:13.598378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.356 [2024-10-16 09:33:13.640737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.356 [2024-10-16 09:33:13.693921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.356 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.356 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:18:49.356 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:49.615 09:33:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:49.873 NVMe0n1 00:18:50.132 09:33:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81395 00:18:50.132 09:33:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.132 09:33:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:50.132 Running I/O for 10 seconds... 
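Editor's note: the host side runs bdevperf against its own RPC socket (/var/tmp/bdevperf.sock) and attaches the controller with the two knobs this test exercises: --reconnect-delay-sec 2 (retry a lost connection every two seconds) and --ctrlr-loss-timeout-sec 5 (give up on the controller if it stays unreachable for five seconds). Condensed from the traced commands above (flags copied from the log, repo paths shortened; perform_tests is the stock bdevperf.py helper):

  # -z starts bdevperf idle, waiting for a perform_tests RPC.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &

  rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $rpc bdev_nvme_set_options -r -1
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the queued-up workload; it then runs verify I/O for the 10 seconds
  # requested by -t, which is the "Running I/O for 10 seconds..." line above.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests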
00:18:51.067 09:33:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:51.329 7956.00 IOPS, 31.08 MiB/s [2024-10-16T09:33:15.733Z] [2024-10-16 09:33:15.546176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacc70 is same with the state(6) to be set 00:18:51.329 [2024-10-16 09:33:15.546241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacc70 is same with the state(6) to be set 00:18:51.329 [2024-10-16 09:33:15.546267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacc70 is same with the state(6) to be set 00:18:51.329 [2024-10-16 09:33:15.546275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacc70 is same with the state(6) to be set 00:18:51.329 [2024-10-16 09:33:15.546282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfacc70 is same with the state(6) to be set 00:18:51.329 [2024-10-16 09:33:15.546793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.546833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.546854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.546865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.546876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.546885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.546911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.546919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.546929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.546937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.546947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.546956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.546966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.546975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 
[2024-10-16 09:33:15.546985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.547009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.547028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.547046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.547065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.547083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.547101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.329 [2024-10-16 09:33:15.547119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.329 [2024-10-16 09:33:15.547271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.329 [2024-10-16 09:33:15.547281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74680 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 09:33:15.547754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 
09:33:15.547773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 09:33:15.547794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 09:33:15.547813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 09:33:15.547832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 09:33:15.547851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 09:33:15.547870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.330 [2024-10-16 09:33:15.547889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.330 [2024-10-16 09:33:15.547963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.330 [2024-10-16 09:33:15.547973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.547982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.547992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.331 [2024-10-16 09:33:15.548354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 
09:33:15.548568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.331 [2024-10-16 09:33:15.548724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.331 [2024-10-16 09:33:15.548736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.332 [2024-10-16 09:33:15.548745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:51.332 [2024-10-16 09:33:15.548765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.548986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:97 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.548995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.549040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.549060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.549081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.332 [2024-10-16 09:33:15.549100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f8530 is same with the state(6) to be set 00:18:51.332 [2024-10-16 09:33:15.549123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74488 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75040 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75048 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75056 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75064 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75072 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75080 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75088 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75096 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 
09:33:15.549446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75104 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75112 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.332 [2024-10-16 09:33:15.549514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.332 [2024-10-16 09:33:15.549521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75120 len:8 PRP1 0x0 PRP2 0x0 00:18:51.332 [2024-10-16 09:33:15.549529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.332 [2024-10-16 09:33:15.549538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.333 [2024-10-16 09:33:15.549544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.333 [2024-10-16 09:33:15.549551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75128 len:8 PRP1 0x0 PRP2 0x0 00:18:51.333 [2024-10-16 09:33:15.549564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.333 [2024-10-16 09:33:15.549583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.333 [2024-10-16 09:33:15.549591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.333 [2024-10-16 09:33:15.549599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75136 len:8 PRP1 0x0 PRP2 0x0 00:18:51.333 [2024-10-16 09:33:15.549608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.333 [2024-10-16 09:33:15.549616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:51.333 [2024-10-16 09:33:15.549623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:51.333 [2024-10-16 09:33:15.549631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75144 len:8 PRP1 0x0 PRP2 0x0 00:18:51.333 [2024-10-16 09:33:15.549639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.333 [2024-10-16 09:33:15.550616] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21f8530 was disconnected and freed. reset controller. 00:18:51.333 [2024-10-16 09:33:15.550894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:51.333 [2024-10-16 09:33:15.550993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a2e0 (9): Bad file descriptor 00:18:51.333 [2024-10-16 09:33:15.551091] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.333 [2024-10-16 09:33:15.551111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a2e0 with addr=10.0.0.3, port=4420 00:18:51.333 [2024-10-16 09:33:15.551121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a2e0 is same with the state(6) to be set 00:18:51.333 [2024-10-16 09:33:15.551144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a2e0 (9): Bad file descriptor 00:18:51.333 [2024-10-16 09:33:15.551159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:51.333 [2024-10-16 09:33:15.551168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:51.333 [2024-10-16 09:33:15.551178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:51.333 [2024-10-16 09:33:15.551197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.333 [2024-10-16 09:33:15.551207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:51.333 09:33:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:53.205 4633.00 IOPS, 18.10 MiB/s [2024-10-16T09:33:17.609Z] 3088.67 IOPS, 12.07 MiB/s [2024-10-16T09:33:17.609Z] [2024-10-16 09:33:17.551289] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.205 [2024-10-16 09:33:17.551364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a2e0 with addr=10.0.0.3, port=4420 00:18:53.205 [2024-10-16 09:33:17.551378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a2e0 is same with the state(6) to be set 00:18:53.205 [2024-10-16 09:33:17.551398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a2e0 (9): Bad file descriptor 00:18:53.205 [2024-10-16 09:33:17.551424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:53.205 [2024-10-16 09:33:17.551434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:53.205 [2024-10-16 09:33:17.551443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:53.205 [2024-10-16 09:33:17.551465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:53.205 [2024-10-16 09:33:17.551476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:53.205 09:33:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:53.205 09:33:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:53.206 09:33:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:53.464 09:33:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:53.464 09:33:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:53.464 09:33:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:53.464 09:33:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:54.031 09:33:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:54.032 09:33:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:55.409 2316.50 IOPS, 9.05 MiB/s [2024-10-16T09:33:19.813Z] 1853.20 IOPS, 7.24 MiB/s [2024-10-16T09:33:19.813Z] [2024-10-16 09:33:19.551654] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.409 [2024-10-16 09:33:19.551729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x218a2e0 with addr=10.0.0.3, port=4420 00:18:55.409 [2024-10-16 09:33:19.551744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218a2e0 is same with the state(6) to be set 00:18:55.409 [2024-10-16 09:33:19.551766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218a2e0 (9): Bad file descriptor 00:18:55.409 [2024-10-16 09:33:19.551783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:55.409 [2024-10-16 09:33:19.551791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:55.409 [2024-10-16 09:33:19.551801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:55.409 [2024-10-16 09:33:19.551825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:55.409 [2024-10-16 09:33:19.551836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.281 1544.33 IOPS, 6.03 MiB/s [2024-10-16T09:33:21.685Z] 1323.71 IOPS, 5.17 MiB/s [2024-10-16T09:33:21.685Z] [2024-10-16 09:33:21.551912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:57.281 [2024-10-16 09:33:21.551961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.281 [2024-10-16 09:33:21.551971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:57.281 [2024-10-16 09:33:21.551980] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:57.281 [2024-10-16 09:33:21.552002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
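The get_controller and get_bdev steps traced above poll the bdevperf RPC socket and compare the reported names against NVMe0 and NVMe0n1. A minimal standalone sketch of that check, pieced together from the rpc.py and jq calls visible in this trace (not copied verbatim from host/timeout.sh), could look like:

    get_controller() {
        # ask the bdevperf app which NVMe controllers it currently has attached
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # list the bdevs the app exposes on top of those controllers
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    }

    # While the connection is healthy these return NVMe0 and NVMe0n1; once the controller
    # has been torn down the same calls come back empty, which is what the later
    # [[ '' == '' ]] checks in this log are asserting.
    [[ "$(get_controller)" == "NVMe0" ]] && [[ "$(get_bdev)" == "NVMe0n1" ]]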
00:18:58.218 1158.25 IOPS, 4.52 MiB/s 00:18:58.218 Latency(us) 00:18:58.218 [2024-10-16T09:33:22.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.218 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:58.218 Verification LBA range: start 0x0 length 0x4000 00:18:58.218 NVMe0n1 : 8.17 1134.43 4.43 15.67 0.00 111139.29 3470.43 7015926.69 00:18:58.218 [2024-10-16T09:33:22.622Z] =================================================================================================================== 00:18:58.218 [2024-10-16T09:33:22.622Z] Total : 1134.43 4.43 15.67 0.00 111139.29 3470.43 7015926.69 00:18:58.218 { 00:18:58.218 "results": [ 00:18:58.218 { 00:18:58.218 "job": "NVMe0n1", 00:18:58.218 "core_mask": "0x4", 00:18:58.218 "workload": "verify", 00:18:58.218 "status": "finished", 00:18:58.218 "verify_range": { 00:18:58.218 "start": 0, 00:18:58.218 "length": 16384 00:18:58.218 }, 00:18:58.218 "queue_depth": 128, 00:18:58.218 "io_size": 4096, 00:18:58.218 "runtime": 8.16801, 00:18:58.218 "iops": 1134.4256434553827, 00:18:58.218 "mibps": 4.4313501697475886, 00:18:58.218 "io_failed": 128, 00:18:58.218 "io_timeout": 0, 00:18:58.218 "avg_latency_us": 111139.28892639402, 00:18:58.218 "min_latency_us": 3470.429090909091, 00:18:58.218 "max_latency_us": 7015926.69090909 00:18:58.218 } 00:18:58.218 ], 00:18:58.218 "core_count": 1 00:18:58.218 } 00:18:58.786 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:58.786 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:58.786 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:59.051 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:59.051 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:59.051 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:59.051 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:59.324 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:59.324 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81395 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81380 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81380 ']' 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81380 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81380 00:18:59.325 killing process with pid 81380 00:18:59.325 Received shutdown signal, test time was about 9.299932 seconds 00:18:59.325 00:18:59.325 Latency(us) 00:18:59.325 [2024-10-16T09:33:23.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.325 [2024-10-16T09:33:23.729Z] =================================================================================================================== 00:18:59.325 [2024-10-16T09:33:23.729Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81380' 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81380 00:18:59.325 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81380 00:18:59.585 09:33:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:59.845 [2024-10-16 09:33:24.070953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:59.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81519 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81519 /var/tmp/bdevperf.sock 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81519 ']' 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.845 09:33:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:59.845 [2024-10-16 09:33:24.145228] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:18:59.845 [2024-10-16 09:33:24.145326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81519 ] 00:19:00.104 [2024-10-16 09:33:24.279171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.104 [2024-10-16 09:33:24.327016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.104 [2024-10-16 09:33:24.379102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:01.041 09:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.041 09:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:01.041 09:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:01.041 09:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:01.300 NVMe0n1 00:19:01.300 09:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81537 00:19:01.300 09:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:01.300 09:33:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:01.559 Running I/O for 10 seconds... 
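Taken together, the setup traced above is: expose the subsystem over TCP, start bdevperf idle on its own RPC socket, attach the controller with short loss/fail/reconnect timeouts, start the verify workload, and then (in the step that follows) remove the listener so the reconnect and timeout path gets exercised. A rough shell sketch of that sequence using the exact flags from this run (the spdk and sock variables are just shorthand for the sketch, not part of the original script):

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # make the subsystem reachable on 10.0.0.3:4420, then start bdevperf idle (-z) on core 2 (-m 0x4)
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &

    # keep the same bdev_nvme options as the trace, then attach with a 5 s controller-loss
    # timeout, 2 s fast-io-fail timeout and 1 s reconnect delay
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # kick off the workload, give it a second to ramp up, then drop the listener to force the timeout path
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
    sleep 1
    "$spdk/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420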
00:19:02.499 09:33:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:02.499 7957.00 IOPS, 31.08 MiB/s [2024-10-16T09:33:26.903Z] [2024-10-16 09:33:26.834008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.499 [2024-10-16 09:33:26.834070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.499 [2024-10-16 09:33:26.834108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.499 [2024-10-16 09:33:26.834126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:02.499 [2024-10-16 09:33:26.834143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:02.499 [2024-10-16 09:33:26.834401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.499 [2024-10-16 09:33:26.834419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 
09:33:26.834741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.499 [2024-10-16 09:33:26.834822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.499 [2024-10-16 09:33:26.834833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.834841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.834851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.834860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.834870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.834878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.834889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.834914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.834925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.834934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.834944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.834953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.834971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.834980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.834991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:72 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 
09:33:26.835640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.500 [2024-10-16 09:33:26.835765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.500 [2024-10-16 09:33:26.835776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.835981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.835992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:02.501 [2024-10-16 09:33:26.836318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836525] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.501 [2024-10-16 09:33:26.836682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.501 [2024-10-16 09:33:26.836691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.502 [2024-10-16 09:33:26.836711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.502 [2024-10-16 09:33:26.836731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:18 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.502 [2024-10-16 09:33:26.836750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.502 [2024-10-16 09:33:26.836770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.502 [2024-10-16 09:33:26.836789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.502 [2024-10-16 09:33:26.836809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:02.502 [2024-10-16 09:33:26.836834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.836854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.836874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.836893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.836912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.836932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69840 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.836951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.836962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.836971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:02.502 [2024-10-16 09:33:26.837165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:02.502 [2024-10-16 09:33:26.837190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.837200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242d530 is same with the state(6) to be set 00:19:02.502 [2024-10-16 09:33:26.837212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:02.502 [2024-10-16 09:33:26.837220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:02.502 [2024-10-16 09:33:26.837228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70808 len:8 PRP1 0x0 PRP2 0x0 00:19:02.502 [2024-10-16 09:33:26.837237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:02.502 [2024-10-16 09:33:26.838196] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x242d530 was disconnected and freed. reset controller. 00:19:02.502 [2024-10-16 09:33:26.838484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.502 [2024-10-16 09:33:26.838508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:02.502 [2024-10-16 09:33:26.838638] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.502 [2024-10-16 09:33:26.838661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bf2e0 with addr=10.0.0.3, port=4420 00:19:02.502 [2024-10-16 09:33:26.838672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:02.502 [2024-10-16 09:33:26.838690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:02.502 [2024-10-16 09:33:26.838706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.502 [2024-10-16 09:33:26.838716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:02.502 [2024-10-16 09:33:26.838725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.502 [2024-10-16 09:33:26.838745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:02.502 [2024-10-16 09:33:26.838761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.502 09:33:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:03.437 4362.00 IOPS, 17.04 MiB/s [2024-10-16T09:33:27.841Z] [2024-10-16 09:33:27.838849] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:03.437 [2024-10-16 09:33:27.838924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bf2e0 with addr=10.0.0.3, port=4420 00:19:03.437 [2024-10-16 09:33:27.838938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:03.437 [2024-10-16 09:33:27.838956] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:03.437 [2024-10-16 09:33:27.838987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:03.437 [2024-10-16 09:33:27.838995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:03.437 [2024-10-16 09:33:27.839005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:03.437 [2024-10-16 09:33:27.839026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:03.437 [2024-10-16 09:33:27.839036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:03.696 09:33:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:03.696 [2024-10-16 09:33:28.071999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:03.696 09:33:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81537 00:19:04.632 2908.00 IOPS, 11.36 MiB/s [2024-10-16T09:33:29.036Z] [2024-10-16 09:33:28.852534] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:06.504 2181.00 IOPS, 8.52 MiB/s [2024-10-16T09:33:31.845Z] 3699.00 IOPS, 14.45 MiB/s [2024-10-16T09:33:32.806Z] 4889.67 IOPS, 19.10 MiB/s [2024-10-16T09:33:34.183Z] 5740.29 IOPS, 22.42 MiB/s [2024-10-16T09:33:34.751Z] 6387.25 IOPS, 24.95 MiB/s [2024-10-16T09:33:36.127Z] 6888.44 IOPS, 26.91 MiB/s [2024-10-16T09:33:36.127Z] 7290.50 IOPS, 28.48 MiB/s 00:19:11.723 Latency(us) 00:19:11.723 [2024-10-16T09:33:36.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.723 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:11.723 Verification LBA range: start 0x0 length 0x4000 00:19:11.723 NVMe0n1 : 10.01 7295.99 28.50 0.00 0.00 17509.82 1467.11 3019898.88 00:19:11.723 [2024-10-16T09:33:36.128Z] =================================================================================================================== 00:19:11.724 [2024-10-16T09:33:36.128Z] Total : 7295.99 28.50 0.00 0.00 17509.82 1467.11 3019898.88 00:19:11.724 { 00:19:11.724 "results": [ 00:19:11.724 { 00:19:11.724 "job": "NVMe0n1", 00:19:11.724 "core_mask": "0x4", 00:19:11.724 "workload": "verify", 00:19:11.724 "status": "finished", 00:19:11.724 "verify_range": { 00:19:11.724 "start": 0, 00:19:11.724 "length": 16384 00:19:11.724 }, 00:19:11.724 "queue_depth": 128, 00:19:11.724 "io_size": 4096, 00:19:11.724 "runtime": 10.010021, 00:19:11.724 "iops": 7295.9886897340175, 00:19:11.724 "mibps": 28.499955819273506, 00:19:11.724 "io_failed": 0, 00:19:11.724 "io_timeout": 0, 00:19:11.724 "avg_latency_us": 17509.82006350803, 00:19:11.724 "min_latency_us": 1467.1127272727272, 00:19:11.724 "max_latency_us": 3019898.88 00:19:11.724 } 00:19:11.724 ], 00:19:11.724 "core_count": 1 00:19:11.724 } 00:19:11.724 09:33:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81646 00:19:11.724 09:33:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.724 09:33:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:11.724 Running I/O for 10 seconds... 
00:19:12.664 09:33:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:12.664 7957.00 IOPS, 31.08 MiB/s [2024-10-16T09:33:37.068Z] [2024-10-16 09:33:37.050911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.050976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 
00:19:12.664 [2024-10-16 09:33:37.051134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051456] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.664 [2024-10-16 09:33:37.051607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the 
state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10033e0 is same with the state(6) to be set 00:19:12.665 [2024-10-16 09:33:37.051900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.051944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.051963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.051989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 
09:33:37.052396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.665 [2024-10-16 09:33:37.052572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.665 [2024-10-16 09:33:37.052583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.052959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.052967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:12.666 [2024-10-16 09:33:37.053263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.666 [2024-10-16 09:33:37.053480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.666 [2024-10-16 09:33:37.053491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.053988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.053997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 
[2024-10-16 09:33:37.054308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.667 [2024-10-16 09:33:37.054316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.667 [2024-10-16 09:33:37.054327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.667 [2024-10-16 09:33:37.054336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.668 [2024-10-16 09:33:37.054629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.668 [2024-10-16 09:33:37.054648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.054658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242f6d0 is same with the state(6) to be set 00:19:12.668 [2024-10-16 09:33:37.054669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:12.668 [2024-10-16 09:33:37.054677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:12.668 [2024-10-16 09:33:37.054685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73888 len:8 PRP1 0x0 PRP2 0x0 00:19:12.668 [2024-10-16 09:33:37.054694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.055644] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x242f6d0 was disconnected and freed. reset controller. 
00:19:12.668 [2024-10-16 09:33:37.055742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.668 [2024-10-16 09:33:37.055759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.055770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.668 [2024-10-16 09:33:37.055779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.055789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.668 [2024-10-16 09:33:37.055798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.055808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.668 [2024-10-16 09:33:37.055816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.668 [2024-10-16 09:33:37.055825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:12.668 [2024-10-16 09:33:37.056032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:12.668 [2024-10-16 09:33:37.056053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:12.668 [2024-10-16 09:33:37.056142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.668 [2024-10-16 09:33:37.056163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bf2e0 with addr=10.0.0.3, port=4420 00:19:12.668 [2024-10-16 09:33:37.056174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:12.668 [2024-10-16 09:33:37.056201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:12.668 [2024-10-16 09:33:37.056217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:12.668 [2024-10-16 09:33:37.056226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:12.668 [2024-10-16 09:33:37.056237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:12.668 [2024-10-16 09:33:37.056257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
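What follows the data-path aborts is the reset path itself: the admin queue's ASYNC EVENT REQUESTs are aborted for the same reason, the immediate reconnect attempt fails because connect() on the io_uring socket returns errno 111 (ECONNREFUSED, the listener being gone), and bdev_nvme records the reset attempt as failed before arming the next one. One way to watch this from outside the bdevperf process is to poll the bdev_nvme controller list over its RPC socket; bdev_nvme_get_controllers is a standard SPDK RPC, but the socket path below is simply the one this log uses later, and the polling loop is only an illustration, not something the test does.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Poll the initiator while the listener is down; each call lists the
  # controllers bdevperf has attached and their current connection details.
  for i in 1 2 3 4 5; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    sleep 1
  done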
00:19:12.928 [2024-10-16 09:33:37.067857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:12.928 09:33:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:13.864 4562.00 IOPS, 17.82 MiB/s [2024-10-16T09:33:38.268Z] [2024-10-16 09:33:38.067995] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.864 [2024-10-16 09:33:38.068052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bf2e0 with addr=10.0.0.3, port=4420 00:19:13.864 [2024-10-16 09:33:38.068066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:13.864 [2024-10-16 09:33:38.068084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:13.864 [2024-10-16 09:33:38.068101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.864 [2024-10-16 09:33:38.068109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:13.864 [2024-10-16 09:33:38.068119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:13.864 [2024-10-16 09:33:38.068139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:13.864 [2024-10-16 09:33:38.068149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:14.801 3041.33 IOPS, 11.88 MiB/s [2024-10-16T09:33:39.205Z] [2024-10-16 09:33:39.068224] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:14.801 [2024-10-16 09:33:39.068279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bf2e0 with addr=10.0.0.3, port=4420 00:19:14.801 [2024-10-16 09:33:39.068292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:14.801 [2024-10-16 09:33:39.068310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:14.801 [2024-10-16 09:33:39.068324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.801 [2024-10-16 09:33:39.068333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:14.801 [2024-10-16 09:33:39.068341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:14.801 [2024-10-16 09:33:39.068359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:14.801 [2024-10-16 09:33:39.068368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:15.738 2281.00 IOPS, 8.91 MiB/s [2024-10-16T09:33:40.142Z] [2024-10-16 09:33:40.068703] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:15.738 [2024-10-16 09:33:40.068756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23bf2e0 with addr=10.0.0.3, port=4420 00:19:15.738 [2024-10-16 09:33:40.068769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf2e0 is same with the state(6) to be set 00:19:15.738 [2024-10-16 09:33:40.069002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bf2e0 (9): Bad file descriptor 00:19:15.738 [2024-10-16 09:33:40.069215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.738 [2024-10-16 09:33:40.069227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:15.738 [2024-10-16 09:33:40.069235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:15.738 [2024-10-16 09:33:40.072725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:15.738 [2024-10-16 09:33:40.072752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:15.738 09:33:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:15.997 [2024-10-16 09:33:40.313352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:15.997 09:33:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81646 00:19:16.833 1824.80 IOPS, 7.13 MiB/s [2024-10-16T09:33:41.237Z] [2024-10-16 09:33:41.110284] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
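This stretch is the recovery half of the cycle: after one more failed attempt at 09:33:40, host/timeout.sh@102 re-adds the TCP listener, the target announces it is listening on 10.0.0.3 port 4420 again, the next scheduled reconnect succeeds (Resetting controller successful), and the wait on pid 81646 can return. Sketched below with an optional check that the listener really is back; the add-listener call is the one logged above, while the nvmf_get_subsystems check is an extra step assumed here for illustration.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Re-create the TCP listener that the outage removed.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Optional sanity check: listen_addresses for nqn.2016-06.io.spdk:cnode1
  # should again show trtype TCP, traddr 10.0.0.3, trsvcid 4420.
  $rpc nvmf_get_subsystems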
00:19:18.708 2931.17 IOPS, 11.45 MiB/s [2024-10-16T09:33:44.049Z] 4065.29 IOPS, 15.88 MiB/s [2024-10-16T09:33:44.986Z] 4927.38 IOPS, 19.25 MiB/s [2024-10-16T09:33:45.924Z] 5587.00 IOPS, 21.82 MiB/s [2024-10-16T09:33:45.924Z] 6114.70 IOPS, 23.89 MiB/s 00:19:21.520 Latency(us) 00:19:21.520 [2024-10-16T09:33:45.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.520 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.520 Verification LBA range: start 0x0 length 0x4000 00:19:21.520 NVMe0n1 : 10.01 6122.02 23.91 4226.07 0.00 12339.52 554.82 3019898.88 00:19:21.520 [2024-10-16T09:33:45.924Z] =================================================================================================================== 00:19:21.520 [2024-10-16T09:33:45.925Z] Total : 6122.02 23.91 4226.07 0.00 12339.52 0.00 3019898.88 00:19:21.521 { 00:19:21.521 "results": [ 00:19:21.521 { 00:19:21.521 "job": "NVMe0n1", 00:19:21.521 "core_mask": "0x4", 00:19:21.521 "workload": "verify", 00:19:21.521 "status": "finished", 00:19:21.521 "verify_range": { 00:19:21.521 "start": 0, 00:19:21.521 "length": 16384 00:19:21.521 }, 00:19:21.521 "queue_depth": 128, 00:19:21.521 "io_size": 4096, 00:19:21.521 "runtime": 10.007652, 00:19:21.521 "iops": 6122.015433790064, 00:19:21.521 "mibps": 23.914122788242437, 00:19:21.521 "io_failed": 42293, 00:19:21.521 "io_timeout": 0, 00:19:21.521 "avg_latency_us": 12339.524919273852, 00:19:21.521 "min_latency_us": 554.8218181818182, 00:19:21.521 "max_latency_us": 3019898.88 00:19:21.521 } 00:19:21.521 ], 00:19:21.521 "core_count": 1 00:19:21.521 } 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81519 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81519 ']' 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81519 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81519 00:19:21.781 killing process with pid 81519 00:19:21.781 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.781 00:19:21.781 Latency(us) 00:19:21.781 [2024-10-16T09:33:46.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.781 [2024-10-16T09:33:46.185Z] =================================================================================================================== 00:19:21.781 [2024-10-16T09:33:46.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81519' 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81519 00:19:21.781 09:33:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81519 00:19:21.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
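The table and JSON block above are the first bdevperf instance's final report: an average of about 6122 IOPS over a 10.01 s run, 42293 failed I/Os (the ones aborted while the listener was down), and a maximum latency of roughly 3.02 s, which is about the length of the outage. The Fail/s column is just io_failed divided by runtime, 42293 / 10.007652 which is about 4226.07, matching the value printed. A small sketch for pulling those figures out of the JSON is below; saving the block to bdevperf.json and having jq available are assumptions, the field names are the ones visible above. The "Waiting for process to start up..." entry that closes this stretch already belongs to the next bdevperf instance being brought up for the following test case.

  # Assumes the JSON result block above was saved to bdevperf.json and jq is installed.
  jq '.results[0] | {iops, fail_per_s: (.io_failed / .runtime), max_latency_s: (.max_latency_us / 1e6)}' bdevperf.json
  # -> iops ~6122.02, fail_per_s ~4226.07, max_latency_s ~3.02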
00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81756 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81756 /var/tmp/bdevperf.sock 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81756 ']' 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.781 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:22.041 [2024-10-16 09:33:46.200876] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:19:22.041 [2024-10-16 09:33:46.201868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81756 ] 00:19:22.041 [2024-10-16 09:33:46.341238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.041 [2024-10-16 09:33:46.395226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.300 [2024-10-16 09:33:46.450024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.300 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.300 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:22.300 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81769 00:19:22.300 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81756 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:22.300 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:22.559 09:33:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:22.818 NVMe0n1 00:19:22.819 09:33:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81806 00:19:22.819 09:33:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.819 09:33:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:23.077 Running I/O for 10 seconds... 
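The setup traced above reduces to three calls against bdevperf's private RPC socket: bdev_nvme_set_options, bdev_nvme_attach_controller with a 5-second controller-loss timeout and 2-second reconnect delay, then perform_tests through bdevperf.py. A minimal sketch with the exact flags from the log; the rpc() wrapper is illustrative only:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                             # paths as printed in this log
    BDEVPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
    SOCK = "/var/tmp/bdevperf.sock"

    def rpc(*args: str) -> None:
        # Every bdevperf-side RPC in this test goes through its private socket.
        subprocess.run([RPC, "-s", SOCK, *args], check=True)

    # Same calls as in the log: driver options first (flags copied verbatim), then
    # a controller that tolerates 5 s of loss with 2 s between reconnect attempts,
    # then the actual I/O run.
    rpc("bdev_nvme_set_options", "-r", "-1", "-e", "9")
    rpc("bdev_nvme_attach_controller", "-b", "NVMe0", "-t", "tcp",
        "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1",
        "--ctrlr-loss-timeout-sec", "5", "--reconnect-delay-sec", "2")
    subprocess.run([BDEVPERF_PY, "-s", SOCK, "perform_tests"], check=True)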
00:19:24.013 09:33:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:24.277 17399.00 IOPS, 67.96 MiB/s [2024-10-16T09:33:48.681Z] [2024-10-16 09:33:48.437784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.277 [2024-10-16 09:33:48.437882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.277 [2024-10-16 09:33:48.437898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.277 [2024-10-16 09:33:48.437928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.277 [2024-10-16 09:33:48.437937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.277 [2024-10-16 09:33:48.437945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.277 [2024-10-16 09:33:48.437952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:24.277 [2024-10-16 09:33:48.437960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.277 [2024-10-16 09:33:48.437977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc2e0 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.437993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16
09:33:48.438107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.277 [2024-10-16 09:33:48.438238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same 
with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438418] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the 
state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1004250 is same with the state(6) to be set 00:19:24.278 [2024-10-16 09:33:48.438875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.278 [2024-10-16 09:33:48.438893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.278 [2024-10-16 09:33:48.438916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.278 [2024-10-16 09:33:48.438926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.278 [2024-10-16 09:33:48.438936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.278 [2024-10-16 09:33:48.438945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.278 [2024-10-16 09:33:48.438955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.278 [2024-10-16 09:33:48.438963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.438974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.438982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.438992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 
09:33:48.439084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.279 [2024-10-16 09:33:48.439789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.279 [2024-10-16 09:33:48.439800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 
[2024-10-16 09:33:48.439919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.439986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.439994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.280 [2024-10-16 09:33:48.440553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.280 [2024-10-16 09:33:48.440564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44856 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:24.281 [2024-10-16 09:33:48.440945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.440987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.440996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 
09:33:48.441156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.281 [2024-10-16 09:33:48.441196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.281 [2024-10-16 09:33:48.441206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.282 [2024-10-16 09:33:48.441521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.441531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4a570 is same with the state(6) to be set 00:19:24.282 [2024-10-16 09:33:48.441550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:24.282 [2024-10-16 09:33:48.441560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:24.282 [2024-10-16 09:33:48.441568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118464 len:8 PRP1 0x0 PRP2 0x0 00:19:24.282 [2024-10-16 09:33:48.441577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.282 [2024-10-16 09:33:48.442525] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f4a570 was disconnected and freed. reset controller. 00:19:24.282 [2024-10-16 09:33:48.442829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:24.282 [2024-10-16 09:33:48.442856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc2e0 (9): Bad file descriptor 00:19:24.282 [2024-10-16 09:33:48.442959] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.282 [2024-10-16 09:33:48.442981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc2e0 with addr=10.0.0.3, port=4420 00:19:24.282 [2024-10-16 09:33:48.442993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc2e0 is same with the state(6) to be set 00:19:24.282 [2024-10-16 09:33:48.443010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc2e0 (9): Bad file descriptor 00:19:24.282 [2024-10-16 09:33:48.443035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.282 [2024-10-16 09:33:48.443045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:24.282 [2024-10-16 09:33:48.443056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:24.282 [2024-10-16 09:33:48.443076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:24.282 [2024-10-16 09:33:48.443087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:24.282 09:33:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81806 00:19:26.160 10034.00 IOPS, 39.20 MiB/s [2024-10-16T09:33:50.564Z] 6689.33 IOPS, 26.13 MiB/s [2024-10-16T09:33:50.564Z] [2024-10-16 09:33:50.443265] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.160 [2024-10-16 09:33:50.443308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc2e0 with addr=10.0.0.3, port=4420 00:19:26.160 [2024-10-16 09:33:50.443324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc2e0 is same with the state(6) to be set 00:19:26.160 [2024-10-16 09:33:50.443345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc2e0 (9): Bad file descriptor 00:19:26.160 [2024-10-16 09:33:50.443373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.161 [2024-10-16 09:33:50.443384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.161 [2024-10-16 09:33:50.443394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.161 [2024-10-16 09:33:50.443416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
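The reset attempt above fails inside uring_sock_create() with errno 111 (ECONNREFUSED): nothing is accepting TCP connections on 10.0.0.3:4420 at that moment, so the controller stays in the failed state and another reset is scheduled. Whether the listener has come back can be probed from the host side without any SPDK tooling — a small sketch using bash's /dev/tcp redirection against the address and port printed in the log:

# exit 0 once something accepts TCP connections on the target address
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
    echo 'listener is up on 10.0.0.3:4420'
else
    echo 'connection refused or timed out (the errno 111 path seen above)'
fi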
00:19:26.161 [2024-10-16 09:33:50.443428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.032 5017.00 IOPS, 19.60 MiB/s [2024-10-16T09:33:52.695Z] 4013.60 IOPS, 15.68 MiB/s [2024-10-16T09:33:52.695Z] [2024-10-16 09:33:52.443607] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.291 [2024-10-16 09:33:52.443646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc2e0 with addr=10.0.0.3, port=4420 00:19:28.291 [2024-10-16 09:33:52.443661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc2e0 is same with the state(6) to be set 00:19:28.291 [2024-10-16 09:33:52.443681] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc2e0 (9): Bad file descriptor 00:19:28.291 [2024-10-16 09:33:52.443699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.291 [2024-10-16 09:33:52.443709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.291 [2024-10-16 09:33:52.443718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.291 [2024-10-16 09:33:52.443740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:28.291 [2024-10-16 09:33:52.443751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.162 3344.67 IOPS, 13.07 MiB/s [2024-10-16T09:33:54.566Z] 2866.86 IOPS, 11.20 MiB/s [2024-10-16T09:33:54.566Z] [2024-10-16 09:33:54.443832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.162 [2024-10-16 09:33:54.444020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.162 [2024-10-16 09:33:54.444186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.163 [2024-10-16 09:33:54.444312] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:30.163 [2024-10-16 09:33:54.444373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
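The timestamps of the reconnect attempts above (09:33:48, :50, :52, :54) show the bdev layer retrying the controller roughly every two seconds until the run ends; the harness verifies that behaviour further down by counting 'reconnect delay' records in trace.txt and treating fewer than three as a failure. The same check can be run by hand against a saved trace — a sketch using the trace.txt path printed in the log:

trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
# the timeout test expects at least 3 delayed reconnect attempts
if (( delays <= 2 )); then
    echo "expected at least 3 delayed reconnects, saw $delays" >&2
    exit 1
fi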
00:19:31.098 2508.50 IOPS, 9.80 MiB/s 00:19:31.098 Latency(us) 00:19:31.098 [2024-10-16T09:33:55.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.098 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:31.098 NVMe0n1 : 8.17 2455.84 9.59 15.66 0.00 51707.92 6911.07 7015926.69 00:19:31.098 [2024-10-16T09:33:55.502Z] =================================================================================================================== 00:19:31.098 [2024-10-16T09:33:55.503Z] Total : 2455.84 9.59 15.66 0.00 51707.92 6911.07 7015926.69 00:19:31.099 { 00:19:31.099 "results": [ 00:19:31.099 { 00:19:31.099 "job": "NVMe0n1", 00:19:31.099 "core_mask": "0x4", 00:19:31.099 "workload": "randread", 00:19:31.099 "status": "finished", 00:19:31.099 "queue_depth": 128, 00:19:31.099 "io_size": 4096, 00:19:31.099 "runtime": 8.171543, 00:19:31.099 "iops": 2455.839735530976, 00:19:31.099 "mibps": 9.593123966917876, 00:19:31.099 "io_failed": 128, 00:19:31.099 "io_timeout": 0, 00:19:31.099 "avg_latency_us": 51707.91904499541, 00:19:31.099 "min_latency_us": 6911.069090909091, 00:19:31.099 "max_latency_us": 7015926.69090909 00:19:31.099 } 00:19:31.099 ], 00:19:31.099 "core_count": 1 00:19:31.099 } 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:31.099 Attaching 5 probes... 00:19:31.099 1405.177398: reset bdev controller NVMe0 00:19:31.099 1405.251108: reconnect bdev controller NVMe0 00:19:31.099 3405.512512: reconnect delay bdev controller NVMe0 00:19:31.099 3405.545519: reconnect bdev controller NVMe0 00:19:31.099 5405.836813: reconnect delay bdev controller NVMe0 00:19:31.099 5405.852251: reconnect bdev controller NVMe0 00:19:31.099 7406.157659: reconnect delay bdev controller NVMe0 00:19:31.099 7406.172664: reconnect bdev controller NVMe0 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81769 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81756 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81756 ']' 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81756 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.099 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81756 00:19:31.357 killing process with pid 81756 00:19:31.358 Received shutdown signal, test time was about 8.233358 seconds 00:19:31.358 00:19:31.358 Latency(us) 00:19:31.358 [2024-10-16T09:33:55.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.358 [2024-10-16T09:33:55.762Z] =================================================================================================================== 00:19:31.358 [2024-10-16T09:33:55.762Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.358 09:33:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:31.358 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:31.358 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81756' 00:19:31.358 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81756 00:19:31.358 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81756 00:19:31.358 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.617 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:31.617 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:31.617 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:31.617 09:33:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:31.617 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.617 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:31.617 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.617 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.617 rmmod nvme_tcp 00:19:31.876 rmmod nvme_fabrics 00:19:31.876 rmmod nvme_keyring 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@515 -- # '[' -n 81333 ']' 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # killprocess 81333 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81333 ']' 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81333 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81333 00:19:31.876 killing process with pid 81333 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81333' 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81333 00:19:31.876 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81333 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:32.135 09:33:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-save 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:19:32.135 ************************************ 00:19:32.135 END TEST nvmf_timeout 00:19:32.135 ************************************ 00:19:32.135 00:19:32.135 real 0m45.491s 00:19:32.135 user 2m13.368s 00:19:32.135 sys 0m5.474s 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.135 09:33:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.394 09:33:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:32.394 09:33:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:32.394 ************************************ 00:19:32.394 END TEST nvmf_host 00:19:32.394 ************************************ 00:19:32.394 00:19:32.394 real 4m54.211s 00:19:32.394 user 12m47.459s 00:19:32.394 sys 1m8.816s 00:19:32.394 09:33:56 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.394 09:33:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.394 09:33:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:19:32.394 09:33:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:19:32.394 00:19:32.394 real 12m6.242s 00:19:32.394 user 29m10.724s 00:19:32.394 sys 3m6.529s 00:19:32.394 09:33:56 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.394 09:33:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.394 ************************************ 00:19:32.394 END TEST nvmf_tcp 00:19:32.394 ************************************ 00:19:32.394 09:33:56 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:19:32.394 09:33:56 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:32.394 09:33:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:32.394 09:33:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.394 09:33:56 -- common/autotest_common.sh@10 -- # set +x 00:19:32.394 ************************************ 00:19:32.394 START TEST nvmf_dif 00:19:32.394 ************************************ 00:19:32.394 09:33:56 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:32.394 * Looking for test storage... 00:19:32.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:32.394 09:33:56 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:32.394 09:33:56 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:32.394 09:33:56 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:19:32.394 09:33:56 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.394 09:33:56 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:32.654 09:33:56 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.654 09:33:56 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:32.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.654 --rc genhtml_branch_coverage=1 00:19:32.654 --rc genhtml_function_coverage=1 00:19:32.654 --rc genhtml_legend=1 00:19:32.654 --rc geninfo_all_blocks=1 00:19:32.654 --rc geninfo_unexecuted_blocks=1 00:19:32.654 00:19:32.654 ' 00:19:32.654 09:33:56 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:32.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.654 --rc genhtml_branch_coverage=1 00:19:32.654 --rc genhtml_function_coverage=1 00:19:32.654 --rc genhtml_legend=1 00:19:32.654 --rc geninfo_all_blocks=1 00:19:32.654 --rc geninfo_unexecuted_blocks=1 00:19:32.654 00:19:32.654 ' 00:19:32.654 09:33:56 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:32.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.654 --rc genhtml_branch_coverage=1 00:19:32.654 --rc genhtml_function_coverage=1 00:19:32.654 --rc genhtml_legend=1 00:19:32.654 --rc geninfo_all_blocks=1 00:19:32.654 --rc geninfo_unexecuted_blocks=1 00:19:32.654 00:19:32.654 ' 00:19:32.654 09:33:56 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:32.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.654 --rc genhtml_branch_coverage=1 00:19:32.654 --rc genhtml_function_coverage=1 00:19:32.654 --rc genhtml_legend=1 00:19:32.654 --rc geninfo_all_blocks=1 00:19:32.654 --rc geninfo_unexecuted_blocks=1 00:19:32.654 00:19:32.654 ' 00:19:32.654 09:33:56 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.654 09:33:56 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.654 09:33:56 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.654 09:33:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.654 09:33:56 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.654 09:33:56 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.654 09:33:56 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:32.654 09:33:56 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.654 09:33:56 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.654 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.654 09:33:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:32.654 09:33:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:32.654 09:33:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:32.654 09:33:56 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:32.654 09:33:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.654 09:33:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:32.654 09:33:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@458 -- # nvmf_veth_init 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:32.654 09:33:56 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:32.655 Cannot find device 
"nvmf_init_br" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:32.655 Cannot find device "nvmf_init_br2" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:32.655 Cannot find device "nvmf_tgt_br" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.655 Cannot find device "nvmf_tgt_br2" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:32.655 Cannot find device "nvmf_init_br" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:32.655 Cannot find device "nvmf_init_br2" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:32.655 Cannot find device "nvmf_tgt_br" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:32.655 Cannot find device "nvmf_tgt_br2" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:32.655 Cannot find device "nvmf_br" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:32.655 Cannot find device "nvmf_init_if" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:32.655 Cannot find device "nvmf_init_if2" 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:32.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:32.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:32.655 09:33:56 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:32.655 09:33:57 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:32.655 09:33:57 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:32.655 09:33:57 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:32.655 09:33:57 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:32.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:32.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:32.914 00:19:32.914 --- 10.0.0.3 ping statistics --- 00:19:32.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.914 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:32.914 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:32.914 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:19:32.914 00:19:32.914 --- 10.0.0.4 ping statistics --- 00:19:32.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.914 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:32.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:32.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:19:32.914 00:19:32.914 --- 10.0.0.1 ping statistics --- 00:19:32.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.914 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:32.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:19:32.914 00:19:32.914 --- 10.0.0.2 ping statistics --- 00:19:32.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.914 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@459 -- # return 0 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:19:32.914 09:33:57 nvmf_dif -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:33.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:33.173 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:33.173 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:33.431 09:33:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:33.431 09:33:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=82294 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:33.431 09:33:57 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 82294 00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 82294 ']' 00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
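The nvmftestinit sequence above builds the virtual topology for the dif tests from nothing: a target network namespace, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace, iptables ACCEPT rules for the NVMe/TCP port, and four pings to confirm reachability. Condensed to a single initiator/target pair, the same commands look roughly like this — a sketch, not the full two-interface setup the harness performs:

# one initiator veth bridged to one target veth living in a namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # should answer once the bridge is up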
00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.431 09:33:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:33.431 [2024-10-16 09:33:57.704267] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:19:33.431 [2024-10-16 09:33:57.704871] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.690 [2024-10-16 09:33:57.845063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.690 [2024-10-16 09:33:57.895707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.690 [2024-10-16 09:33:57.896024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.690 [2024-10-16 09:33:57.896050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.690 [2024-10-16 09:33:57.896061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.690 [2024-10-16 09:33:57.896070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.690 [2024-10-16 09:33:57.896505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.690 [2024-10-16 09:33:57.952693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:19:33.690 09:33:58 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:33.690 09:33:58 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.690 09:33:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:33.690 09:33:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:33.690 [2024-10-16 09:33:58.065051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.690 09:33:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.690 09:33:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:33.690 ************************************ 00:19:33.690 START TEST fio_dif_1_default 00:19:33.690 ************************************ 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:33.690 09:33:58 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:33.690 bdev_null0 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.690 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.949 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:33.949 [2024-10-16 09:33:58.113229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 
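create_subsystems 0 above wires a single null bdev into a single NVMe-oF subsystem: a 64 MB, 512-byte-block null bdev carrying 16 bytes of metadata with DIF type 1 protection, exposed as nqn.2016-06.io.spdk:cnode0 on the 10.0.0.3:4420 TCP listener. Outside the harness the same objects can be created directly with rpc.py against a running nvmf_tgt — a sketch of the equivalent calls, mirroring the rpc_cmd arguments shown above:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420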
00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:33.950 { 00:19:33.950 "params": { 00:19:33.950 "name": "Nvme$subsystem", 00:19:33.950 "trtype": "$TEST_TRANSPORT", 00:19:33.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.950 "adrfam": "ipv4", 00:19:33.950 "trsvcid": "$NVMF_PORT", 00:19:33.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.950 "hdgst": ${hdgst:-false}, 00:19:33.950 "ddgst": ${ddgst:-false} 00:19:33.950 }, 00:19:33.950 "method": "bdev_nvme_attach_controller" 00:19:33.950 } 00:19:33.950 EOF 00:19:33.950 )") 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
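The config=() / here-doc / jq / IFS=',' sequence above simply renders one bdev_nvme_attach_controller JSON object per requested subsystem and comma-joins the objects before handing them to the fio plugin. A stripped-down sketch of the same idea follows; values are hard-coded for readability and this is not the exact gen_nvmf_target_json code from nvmf/common.sh.

# Build one attach-controller object per subsystem id, then comma-join them.
config=()
for sub in 0; do
    config+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.3",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
              "hostnqn": "nqn.2016-06.io.spdk:host$sub",
              "hdgst": false, "ddgst": false } }
EOF
    )")
done
IFS=,
printf '%s\n' "${config[*]}"   # with several subsystem ids this prints the objects joined by commas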
00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:33.950 "params": { 00:19:33.950 "name": "Nvme0", 00:19:33.950 "trtype": "tcp", 00:19:33.950 "traddr": "10.0.0.3", 00:19:33.950 "adrfam": "ipv4", 00:19:33.950 "trsvcid": "4420", 00:19:33.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:33.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:33.950 "hdgst": false, 00:19:33.950 "ddgst": false 00:19:33.950 }, 00:19:33.950 "method": "bdev_nvme_attach_controller" 00:19:33.950 }' 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:33.950 09:33:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:33.950 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:33.950 fio-3.35 00:19:33.950 Starting 1 thread 00:19:46.157 00:19:46.157 filename0: (groupid=0, jobs=1): err= 0: pid=82353: Wed Oct 16 09:34:08 2024 00:19:46.157 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(394MiB/10001msec) 00:19:46.157 slat (usec): min=5, max=336, avg= 7.64, stdev= 4.34 00:19:46.157 clat (usec): min=311, max=2688, avg=374.38, stdev=46.82 00:19:46.157 lat (usec): min=317, max=2714, avg=382.02, stdev=47.87 00:19:46.157 clat percentiles (usec): 00:19:46.157 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 338], 00:19:46.157 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:19:46.157 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 449], 00:19:46.157 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 685], 99.95th=[ 750], 00:19:46.157 | 99.99th=[ 1614] 00:19:46.157 bw ( KiB/s): min=37856, max=41696, per=99.91%, avg=40270.47, stdev=969.66, samples=19 00:19:46.157 iops : min= 9464, max=10424, avg=10067.58, stdev=242.38, samples=19 00:19:46.157 lat (usec) : 500=98.84%, 750=1.11%, 1000=0.03% 00:19:46.157 lat (msec) : 2=0.01%, 4=0.01% 00:19:46.157 cpu : usr=84.59%, sys=13.31%, ctx=70, majf=0, minf=9 00:19:46.157 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:46.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.158 issued rwts: total=100780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.158 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:46.158 00:19:46.158 Run status group 0 (all jobs): 
00:19:46.158 READ: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=394MiB (413MB), run=10001-10001msec 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 ************************************ 00:19:46.158 END TEST fio_dif_1_default 00:19:46.158 ************************************ 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 00:19:46.158 real 0m11.003s 00:19:46.158 user 0m9.095s 00:19:46.158 sys 0m1.616s 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 09:34:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:46.158 09:34:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:46.158 09:34:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 ************************************ 00:19:46.158 START TEST fio_dif_1_multi_subsystems 00:19:46.158 ************************************ 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 bdev_null0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 [2024-10-16 09:34:09.171250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 bdev_null1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:46.158 { 00:19:46.158 "params": { 00:19:46.158 "name": "Nvme$subsystem", 00:19:46.158 "trtype": "$TEST_TRANSPORT", 00:19:46.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.158 "adrfam": "ipv4", 00:19:46.158 "trsvcid": "$NVMF_PORT", 00:19:46.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.158 "hdgst": ${hdgst:-false}, 00:19:46.158 "ddgst": ${ddgst:-false} 00:19:46.158 }, 00:19:46.158 "method": "bdev_nvme_attach_controller" 00:19:46.158 } 00:19:46.158 EOF 00:19:46.158 )") 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:19:46.158 09:34:09 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:46.158 { 00:19:46.158 "params": { 00:19:46.158 "name": "Nvme$subsystem", 00:19:46.158 "trtype": "$TEST_TRANSPORT", 00:19:46.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.158 "adrfam": "ipv4", 00:19:46.158 "trsvcid": "$NVMF_PORT", 00:19:46.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.158 "hdgst": ${hdgst:-false}, 00:19:46.158 "ddgst": ${ddgst:-false} 00:19:46.158 }, 00:19:46.158 "method": "bdev_nvme_attach_controller" 00:19:46.158 } 00:19:46.158 EOF 00:19:46.158 )") 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:19:46.158 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:46.158 "params": { 00:19:46.159 "name": "Nvme0", 00:19:46.159 "trtype": "tcp", 00:19:46.159 "traddr": "10.0.0.3", 00:19:46.159 "adrfam": "ipv4", 00:19:46.159 "trsvcid": "4420", 00:19:46.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:46.159 "hdgst": false, 00:19:46.159 "ddgst": false 00:19:46.159 }, 00:19:46.159 "method": "bdev_nvme_attach_controller" 00:19:46.159 },{ 00:19:46.159 "params": { 00:19:46.159 "name": "Nvme1", 00:19:46.159 "trtype": "tcp", 00:19:46.159 "traddr": "10.0.0.3", 00:19:46.159 "adrfam": "ipv4", 00:19:46.159 "trsvcid": "4420", 00:19:46.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.159 "hdgst": false, 00:19:46.159 "ddgst": false 00:19:46.159 }, 00:19:46.159 "method": "bdev_nvme_attach_controller" 00:19:46.159 }' 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:46.159 
09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:46.159 09:34:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.159 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:46.159 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:46.159 fio-3.35 00:19:46.159 Starting 2 threads 00:19:56.162 00:19:56.162 filename0: (groupid=0, jobs=1): err= 0: pid=82513: Wed Oct 16 09:34:20 2024 00:19:56.162 read: IOPS=5308, BW=20.7MiB/s (21.7MB/s)(207MiB/10001msec) 00:19:56.162 slat (nsec): min=6269, max=71075, avg=12630.05, stdev=4538.43 00:19:56.162 clat (usec): min=401, max=1398, avg=719.59, stdev=61.70 00:19:56.162 lat (usec): min=408, max=1422, avg=732.22, stdev=62.72 00:19:56.162 clat percentiles (usec): 00:19:56.162 | 1.00th=[ 594], 5.00th=[ 627], 10.00th=[ 652], 20.00th=[ 668], 00:19:56.162 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 709], 60.00th=[ 725], 00:19:56.162 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 832], 00:19:56.162 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 963], 00:19:56.162 | 99.99th=[ 1012] 00:19:56.162 bw ( KiB/s): min=20832, max=21600, per=50.01%, avg=21234.53, stdev=220.24, samples=19 00:19:56.162 iops : min= 5208, max= 5400, avg=5308.63, stdev=55.06, samples=19 00:19:56.162 lat (usec) : 500=0.03%, 750=72.27%, 1000=27.68% 00:19:56.162 lat (msec) : 2=0.01% 00:19:56.162 cpu : usr=90.20%, sys=8.48%, ctx=7, majf=0, minf=0 00:19:56.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.162 issued rwts: total=53088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.162 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:56.162 filename1: (groupid=0, jobs=1): err= 0: pid=82514: Wed Oct 16 09:34:20 2024 00:19:56.162 read: IOPS=5306, BW=20.7MiB/s (21.7MB/s)(207MiB/10001msec) 00:19:56.162 slat (usec): min=4, max=704, avg=12.88, stdev= 6.63 00:19:56.162 clat (usec): min=518, max=1373, avg=718.61, stdev=57.10 00:19:56.162 lat (usec): min=525, max=1390, avg=731.49, stdev=57.88 00:19:56.162 clat percentiles (usec): 00:19:56.162 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 668], 00:19:56.162 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 725], 00:19:56.162 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 824], 00:19:56.162 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 1074], 00:19:56.162 | 99.99th=[ 1319] 00:19:56.162 bw ( KiB/s): min=20832, max=21600, per=49.99%, avg=21226.11, stdev=221.46, samples=19 00:19:56.162 iops : min= 5208, max= 5400, avg=5306.53, stdev=55.36, samples=19 00:19:56.162 lat (usec) : 750=74.98%, 1000=24.92% 00:19:56.162 lat (msec) : 2=0.10% 00:19:56.162 cpu : usr=89.80%, sys=8.52%, ctx=137, majf=0, minf=0 00:19:56.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.162 issued rwts: total=53068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:56.162 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:56.162 00:19:56.162 Run status group 0 (all jobs): 00:19:56.162 READ: bw=41.5MiB/s (43.5MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=415MiB (435MB), run=10001-10001msec 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:56.162 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 ************************************ 00:19:56.163 END TEST fio_dif_1_multi_subsystems 00:19:56.163 ************************************ 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.163 00:19:56.163 real 0m11.155s 00:19:56.163 user 0m18.788s 00:19:56.163 sys 0m2.028s 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 09:34:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:56.163 09:34:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:56.163 09:34:20 nvmf_dif -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 ************************************ 00:19:56.163 START TEST fio_dif_rand_params 00:19:56.163 ************************************ 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 bdev_null0 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 [2024-10-16 09:34:20.378241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- 
# fio /dev/fd/62 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:56.163 { 00:19:56.163 "params": { 00:19:56.163 "name": "Nvme$subsystem", 00:19:56.163 "trtype": "$TEST_TRANSPORT", 00:19:56.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.163 "adrfam": "ipv4", 00:19:56.163 "trsvcid": "$NVMF_PORT", 00:19:56.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.163 "hdgst": ${hdgst:-false}, 00:19:56.163 "ddgst": ${ddgst:-false} 00:19:56.163 }, 00:19:56.163 "method": "bdev_nvme_attach_controller" 00:19:56.163 } 00:19:56.163 EOF 00:19:56.163 )") 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:56.163 "params": { 00:19:56.163 "name": "Nvme0", 00:19:56.163 "trtype": "tcp", 00:19:56.163 "traddr": "10.0.0.3", 00:19:56.163 "adrfam": "ipv4", 00:19:56.163 "trsvcid": "4420", 00:19:56.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:56.163 "hdgst": false, 00:19:56.163 "ddgst": false 00:19:56.163 }, 00:19:56.163 "method": "bdev_nvme_attach_controller" 00:19:56.163 }' 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:56.163 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:56.164 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:56.164 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:56.164 09:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.423 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:56.423 ... 
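The job description line above (rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3) comes from a job file the harness generates on the fly and feeds to fio over /dev/fd/61. The exact file is not echoed into the log, so the following is an assumed reconstruction: the keys are taken from the parameters set earlier in this test (bs=128k, numjobs=3, iodepth=3, runtime=5), thread=1 and time_based=1 are assumptions required for this style of run, and the bdev name Nvme0n1 is the conventional namespace-1 name for the controller attached as "Nvme0".

# Assumed reconstruction of the generated fio job for this run.
cat > /tmp/dif.job <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF

# Same invocation shape as the harness (plugin and fio paths copied from the trace);
# /tmp/bdev.json would hold the attach-controller JSON printed earlier in the log.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.job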
00:19:56.423 fio-3.35 00:19:56.423 Starting 3 threads 00:20:02.989 00:20:02.989 filename0: (groupid=0, jobs=1): err= 0: pid=82670: Wed Oct 16 09:34:26 2024 00:20:02.989 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(177MiB/5006msec) 00:20:02.989 slat (nsec): min=6447, max=44085, avg=9745.68, stdev=4551.47 00:20:02.989 clat (usec): min=8619, max=12083, avg=10559.32, stdev=388.79 00:20:02.989 lat (usec): min=8629, max=12096, avg=10569.06, stdev=388.96 00:20:02.989 clat percentiles (usec): 00:20:02.989 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:20:02.989 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:20:02.989 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11338], 00:20:02.989 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12125], 99.95th=[12125], 00:20:02.989 | 99.99th=[12125] 00:20:02.989 bw ( KiB/s): min=35328, max=36864, per=33.34%, avg=36256.60, stdev=594.27, samples=10 00:20:02.989 iops : min= 276, max= 288, avg=283.20, stdev= 4.73, samples=10 00:20:02.989 lat (msec) : 10=0.42%, 20=99.58% 00:20:02.989 cpu : usr=91.33%, sys=8.07%, ctx=47, majf=0, minf=0 00:20:02.989 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.989 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.989 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:02.989 filename0: (groupid=0, jobs=1): err= 0: pid=82671: Wed Oct 16 09:34:26 2024 00:20:02.989 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(177MiB/5001msec) 00:20:02.989 slat (nsec): min=6100, max=75023, avg=10515.22, stdev=5846.18 00:20:02.989 clat (usec): min=6112, max=15462, avg=10567.22, stdev=484.21 00:20:02.989 lat (usec): min=6119, max=15477, avg=10577.74, stdev=484.31 00:20:02.989 clat percentiles (usec): 00:20:02.989 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:20:02.989 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:20:02.989 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:20:02.989 | 99.00th=[11863], 99.50th=[11863], 99.90th=[15401], 99.95th=[15401], 00:20:02.989 | 99.99th=[15401] 00:20:02.989 bw ( KiB/s): min=35328, max=36864, per=33.19%, avg=36096.00, stdev=543.06, samples=9 00:20:02.989 iops : min= 276, max= 288, avg=282.00, stdev= 4.24, samples=9 00:20:02.989 lat (msec) : 10=0.42%, 20=99.58% 00:20:02.989 cpu : usr=91.08%, sys=8.30%, ctx=10, majf=0, minf=9 00:20:02.989 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.989 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.989 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:02.989 filename0: (groupid=0, jobs=1): err= 0: pid=82672: Wed Oct 16 09:34:26 2024 00:20:02.989 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(177MiB/5007msec) 00:20:02.989 slat (nsec): min=6316, max=49635, avg=10223.61, stdev=5118.25 00:20:02.989 clat (usec): min=8134, max=12175, avg=10559.67, stdev=389.18 00:20:02.989 lat (usec): min=8143, max=12195, avg=10569.89, stdev=389.17 00:20:02.989 clat percentiles (usec): 00:20:02.989 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:20:02.989 | 30.00th=[10290], 40.00th=[10421], 
50.00th=[10421], 60.00th=[10552], 00:20:02.989 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:20:02.989 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:20:02.989 | 99.99th=[12125] 00:20:02.989 bw ( KiB/s): min=35328, max=36864, per=33.33%, avg=36249.60, stdev=605.81, samples=10 00:20:02.989 iops : min= 276, max= 288, avg=283.20, stdev= 4.73, samples=10 00:20:02.989 lat (msec) : 10=0.42%, 20=99.58% 00:20:02.989 cpu : usr=90.55%, sys=8.85%, ctx=19, majf=0, minf=0 00:20:02.989 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.989 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.989 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:02.989 00:20:02.989 Run status group 0 (all jobs): 00:20:02.989 READ: bw=106MiB/s (111MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.2MB/s), io=532MiB (558MB), run=5001-5007msec 00:20:02.989 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:02.989 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:02.990 
09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 bdev_null0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 [2024-10-16 09:34:26.376676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 bdev_null1 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 bdev_null2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:02.990 { 00:20:02.990 "params": { 00:20:02.990 "name": "Nvme$subsystem", 00:20:02.990 "trtype": "$TEST_TRANSPORT", 00:20:02.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.990 "adrfam": "ipv4", 00:20:02.990 "trsvcid": "$NVMF_PORT", 00:20:02.990 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.990 "hdgst": ${hdgst:-false}, 00:20:02.990 "ddgst": ${ddgst:-false} 00:20:02.990 }, 00:20:02.990 "method": "bdev_nvme_attach_controller" 00:20:02.990 } 00:20:02.990 EOF 00:20:02.990 )") 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:02.990 { 00:20:02.990 "params": { 00:20:02.990 "name": "Nvme$subsystem", 00:20:02.990 "trtype": "$TEST_TRANSPORT", 00:20:02.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.990 "adrfam": "ipv4", 00:20:02.990 "trsvcid": "$NVMF_PORT", 00:20:02.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.990 "hdgst": ${hdgst:-false}, 00:20:02.990 "ddgst": ${ddgst:-false} 00:20:02.990 }, 00:20:02.990 "method": "bdev_nvme_attach_controller" 00:20:02.990 } 00:20:02.990 EOF 00:20:02.990 )") 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:02.990 { 00:20:02.990 "params": { 00:20:02.990 "name": "Nvme$subsystem", 00:20:02.990 "trtype": "$TEST_TRANSPORT", 00:20:02.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.990 "adrfam": "ipv4", 00:20:02.990 "trsvcid": "$NVMF_PORT", 00:20:02.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.990 "hdgst": ${hdgst:-false}, 00:20:02.990 "ddgst": ${ddgst:-false} 00:20:02.990 }, 00:20:02.990 "method": "bdev_nvme_attach_controller" 00:20:02.990 } 00:20:02.990 EOF 00:20:02.990 )") 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:02.990 "params": { 00:20:02.990 "name": "Nvme0", 00:20:02.990 "trtype": "tcp", 00:20:02.990 "traddr": "10.0.0.3", 00:20:02.990 "adrfam": "ipv4", 00:20:02.990 "trsvcid": "4420", 00:20:02.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:02.990 "hdgst": false, 00:20:02.990 "ddgst": false 00:20:02.990 }, 00:20:02.990 "method": "bdev_nvme_attach_controller" 00:20:02.990 },{ 00:20:02.990 "params": { 00:20:02.990 "name": "Nvme1", 00:20:02.990 "trtype": "tcp", 00:20:02.990 "traddr": "10.0.0.3", 00:20:02.990 "adrfam": "ipv4", 00:20:02.990 "trsvcid": "4420", 00:20:02.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.990 "hdgst": false, 00:20:02.990 "ddgst": false 00:20:02.990 }, 00:20:02.990 "method": "bdev_nvme_attach_controller" 00:20:02.990 },{ 00:20:02.990 "params": { 00:20:02.990 "name": "Nvme2", 00:20:02.990 "trtype": "tcp", 00:20:02.990 "traddr": "10.0.0.3", 00:20:02.990 "adrfam": "ipv4", 00:20:02.990 "trsvcid": "4420", 00:20:02.990 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:02.990 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:02.990 "hdgst": false, 00:20:02.990 "ddgst": false 00:20:02.990 }, 00:20:02.990 "method": "bdev_nvme_attach_controller" 00:20:02.990 }' 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:02.990 09:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:02.990 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:02.990 ... 00:20:02.990 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:02.990 ... 00:20:02.990 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:02.990 ... 00:20:02.990 fio-3.35 00:20:02.990 Starting 24 threads 00:20:15.194 00:20:15.194 filename0: (groupid=0, jobs=1): err= 0: pid=82768: Wed Oct 16 09:34:37 2024 00:20:15.194 read: IOPS=220, BW=884KiB/s (905kB/s)(8852KiB/10018msec) 00:20:15.194 slat (usec): min=3, max=8033, avg=30.16, stdev=282.71 00:20:15.194 clat (msec): min=23, max=141, avg=72.26, stdev=22.20 00:20:15.194 lat (msec): min=23, max=141, avg=72.29, stdev=22.19 00:20:15.194 clat percentiles (msec): 00:20:15.194 | 1.00th=[ 38], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 50], 00:20:15.194 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:20:15.194 | 70.00th=[ 79], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 114], 00:20:15.194 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:20:15.194 | 99.99th=[ 142] 00:20:15.194 bw ( KiB/s): min= 512, max= 1104, per=4.29%, avg=881.10, stdev=177.28, samples=20 00:20:15.194 iops : min= 128, max= 276, avg=220.25, stdev=44.31, samples=20 00:20:15.194 lat (msec) : 50=20.70%, 100=64.66%, 250=14.64% 00:20:15.194 cpu : usr=39.19%, sys=1.61%, ctx=1184, majf=0, minf=9 00:20:15.194 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:15.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.194 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.194 issued rwts: total=2213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.194 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.194 filename0: (groupid=0, jobs=1): err= 0: pid=82769: Wed Oct 16 09:34:37 2024 00:20:15.194 read: IOPS=207, BW=829KiB/s (849kB/s)(8296KiB/10006msec) 00:20:15.194 slat (usec): min=4, max=9026, avg=23.58, stdev=206.18 00:20:15.194 clat (msec): min=7, max=183, avg=77.07, stdev=22.98 00:20:15.194 lat (msec): min=7, max=183, avg=77.10, stdev=22.99 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 21], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:20:15.195 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 80], 00:20:15.195 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 113], 00:20:15.195 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 184], 00:20:15.195 | 99.99th=[ 184] 00:20:15.195 bw ( KiB/s): min= 507, max= 1048, per=3.99%, avg=820.26, stdev=175.31, samples=19 00:20:15.195 iops : min= 126, max= 262, avg=205.00, stdev=43.87, samples=19 00:20:15.195 lat (msec) : 10=0.48%, 20=0.43%, 50=13.02%, 100=70.40%, 250=15.67% 00:20:15.195 cpu : usr=40.00%, sys=1.64%, ctx=1192, majf=0, minf=9 00:20:15.195 IO depths : 1=0.1%, 2=2.5%, 4=10.3%, 8=72.6%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=89.9%, 8=7.8%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=2074,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename0: (groupid=0, jobs=1): err= 0: pid=82770: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=225, BW=901KiB/s (922kB/s)(9084KiB/10084msec) 00:20:15.195 slat (usec): min=4, max=8088, avg=32.66, stdev=340.06 00:20:15.195 clat (msec): min=2, max=151, avg=70.81, stdev=27.05 00:20:15.195 lat (msec): min=2, max=151, avg=70.84, stdev=27.05 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 43], 20.00th=[ 51], 00:20:15.195 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 74], 00:20:15.195 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 112], 00:20:15.195 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 144], 00:20:15.195 | 99.99th=[ 153] 00:20:15.195 bw ( KiB/s): min= 616, max= 1920, per=4.39%, avg=901.40, stdev=283.54, samples=20 00:20:15.195 iops : min= 154, max= 480, avg=225.30, stdev=70.90, samples=20 00:20:15.195 lat (msec) : 4=3.61%, 10=2.73%, 20=0.70%, 50=12.59%, 100=64.95% 00:20:15.195 lat (msec) : 250=15.41% 00:20:15.195 cpu : usr=38.76%, sys=1.60%, ctx=1110, majf=0, minf=0 00:20:15.195 IO depths : 1=0.3%, 2=1.1%, 4=3.7%, 8=78.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename0: (groupid=0, jobs=1): err= 0: pid=82771: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=230, BW=923KiB/s (945kB/s)(9232KiB/10003msec) 00:20:15.195 slat (usec): min=4, max=10046, avg=37.77, stdev=338.61 00:20:15.195 clat (msec): min=2, max=221, avg=69.20, stdev=25.30 00:20:15.195 lat (msec): min=2, max=221, avg=69.24, stdev=25.30 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 6], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 48], 00:20:15.195 | 30.00th=[ 55], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:20:15.195 | 70.00th=[ 77], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 109], 00:20:15.195 | 99.00th=[ 121], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 222], 00:20:15.195 | 99.99th=[ 222] 00:20:15.195 bw ( KiB/s): min= 512, max= 1128, per=4.37%, avg=897.68, stdev=181.44, samples=19 00:20:15.195 iops : min= 128, max= 282, avg=224.42, stdev=45.36, samples=19 00:20:15.195 lat (msec) : 4=0.56%, 10=2.08%, 20=0.39%, 50=23.27%, 100=62.35% 00:20:15.195 lat (msec) : 250=11.35% 00:20:15.195 cpu : usr=37.60%, sys=1.66%, ctx=1574, majf=0, minf=9 00:20:15.195 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=2308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename0: (groupid=0, jobs=1): err= 0: pid=82772: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=212, BW=852KiB/s (872kB/s)(8556KiB/10046msec) 00:20:15.195 slat (usec): min=4, max=8040, avg=35.55, stdev=363.45 00:20:15.195 clat (msec): min=23, max=155, avg=74.95, stdev=23.03 00:20:15.195 lat (msec): min=23, max=155, avg=74.99, stdev=23.04 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 51], 00:20:15.195 | 
30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:15.195 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 114], 00:20:15.195 | 99.00th=[ 136], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 157], 00:20:15.195 | 99.99th=[ 157] 00:20:15.195 bw ( KiB/s): min= 624, max= 1080, per=4.13%, avg=848.95, stdev=181.67, samples=20 00:20:15.195 iops : min= 156, max= 270, avg=212.20, stdev=45.42, samples=20 00:20:15.195 lat (msec) : 50=18.56%, 100=64.33%, 250=17.11% 00:20:15.195 cpu : usr=37.23%, sys=1.44%, ctx=1602, majf=0, minf=9 00:20:15.195 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename0: (groupid=0, jobs=1): err= 0: pid=82773: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=219, BW=876KiB/s (897kB/s)(8812KiB/10058msec) 00:20:15.195 slat (usec): min=5, max=11040, avg=37.34, stdev=377.98 00:20:15.195 clat (msec): min=12, max=139, avg=72.79, stdev=21.78 00:20:15.195 lat (msec): min=12, max=139, avg=72.83, stdev=21.79 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 14], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:20:15.195 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:20:15.195 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 111], 00:20:15.195 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 131], 99.95th=[ 132], 00:20:15.195 | 99.99th=[ 140] 00:20:15.195 bw ( KiB/s): min= 656, max= 1128, per=4.26%, avg=874.45, stdev=154.58, samples=20 00:20:15.195 iops : min= 164, max= 282, avg=218.60, stdev=38.65, samples=20 00:20:15.195 lat (msec) : 20=1.36%, 50=17.75%, 100=67.54%, 250=13.35% 00:20:15.195 cpu : usr=40.86%, sys=1.56%, ctx=1196, majf=0, minf=9 00:20:15.195 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename0: (groupid=0, jobs=1): err= 0: pid=82774: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=180, BW=721KiB/s (738kB/s)(7232KiB/10035msec) 00:20:15.195 slat (usec): min=3, max=4051, avg=22.51, stdev=164.32 00:20:15.195 clat (msec): min=47, max=153, avg=88.58, stdev=20.54 00:20:15.195 lat (msec): min=47, max=153, avg=88.60, stdev=20.54 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 58], 5.00th=[ 67], 10.00th=[ 68], 20.00th=[ 70], 00:20:15.195 | 30.00th=[ 73], 40.00th=[ 77], 50.00th=[ 89], 60.00th=[ 93], 00:20:15.195 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 136], 00:20:15.195 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 155], 00:20:15.195 | 99.99th=[ 155] 00:20:15.195 bw ( KiB/s): min= 512, max= 896, per=3.49%, avg=716.70, stdev=138.81, samples=20 00:20:15.195 iops : min= 128, max= 224, avg=179.15, stdev=34.72, samples=20 00:20:15.195 lat (msec) : 50=0.77%, 100=74.50%, 250=24.72% 00:20:15.195 cpu : usr=47.75%, sys=2.13%, ctx=1314, majf=0, minf=9 00:20:15.195 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=1808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename0: (groupid=0, jobs=1): err= 0: pid=82775: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=230, BW=921KiB/s (943kB/s)(9208KiB/10001msec) 00:20:15.195 slat (usec): min=3, max=8050, avg=45.76, stdev=471.71 00:20:15.195 clat (usec): min=1507, max=216630, avg=69306.14, stdev=25757.95 00:20:15.195 lat (usec): min=1515, max=216643, avg=69351.90, stdev=25760.60 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 4], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 48], 00:20:15.195 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:20:15.195 | 70.00th=[ 79], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 108], 00:20:15.195 | 99.00th=[ 121], 99.50th=[ 176], 99.90th=[ 176], 99.95th=[ 218], 00:20:15.195 | 99.99th=[ 218] 00:20:15.195 bw ( KiB/s): min= 508, max= 1096, per=4.34%, avg=891.16, stdev=177.47, samples=19 00:20:15.195 iops : min= 127, max= 274, avg=222.79, stdev=44.37, samples=19 00:20:15.195 lat (msec) : 2=0.56%, 4=0.56%, 10=2.22%, 20=0.52%, 50=23.72% 00:20:15.195 lat (msec) : 100=60.64%, 250=11.77% 00:20:15.195 cpu : usr=32.30%, sys=1.35%, ctx=922, majf=0, minf=9 00:20:15.195 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=2302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename1: (groupid=0, jobs=1): err= 0: pid=82776: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=219, BW=878KiB/s (899kB/s)(8784KiB/10007msec) 00:20:15.195 slat (usec): min=6, max=8066, avg=41.05, stdev=360.57 00:20:15.195 clat (msec): min=7, max=233, avg=72.74, stdev=24.32 00:20:15.195 lat (msec): min=7, max=233, avg=72.78, stdev=24.32 00:20:15.195 clat percentiles (msec): 00:20:15.195 | 1.00th=[ 23], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 50], 00:20:15.195 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 73], 00:20:15.195 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 114], 00:20:15.195 | 99.00th=[ 124], 99.50th=[ 199], 99.90th=[ 199], 99.95th=[ 234], 00:20:15.195 | 99.99th=[ 234] 00:20:15.195 bw ( KiB/s): min= 492, max= 1120, per=4.26%, avg=874.63, stdev=194.47, samples=19 00:20:15.195 iops : min= 123, max= 280, avg=218.63, stdev=48.60, samples=19 00:20:15.195 lat (msec) : 10=0.27%, 20=0.14%, 50=20.26%, 100=65.94%, 250=13.39% 00:20:15.195 cpu : usr=38.04%, sys=1.55%, ctx=1235, majf=0, minf=9 00:20:15.195 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:15.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.195 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.195 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.195 filename1: (groupid=0, jobs=1): err= 0: pid=82777: Wed Oct 16 09:34:37 2024 00:20:15.195 read: IOPS=205, BW=822KiB/s (842kB/s)(8276KiB/10069msec) 00:20:15.196 slat (usec): min=4, max=8048, avg=29.27, stdev=310.14 00:20:15.196 clat (msec): min=5, max=147, avg=77.62, stdev=23.00 00:20:15.196 lat (msec): min=5, 
max=148, avg=77.65, stdev=23.01 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 65], 00:20:15.196 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:20:15.196 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 112], 00:20:15.196 | 99.00th=[ 129], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:20:15.196 | 99.99th=[ 148] 00:20:15.196 bw ( KiB/s): min= 632, max= 1152, per=4.00%, avg=822.10, stdev=149.66, samples=20 00:20:15.196 iops : min= 158, max= 288, avg=205.50, stdev=37.40, samples=20 00:20:15.196 lat (msec) : 10=2.22%, 20=0.87%, 50=7.73%, 100=72.02%, 250=17.16% 00:20:15.196 cpu : usr=41.50%, sys=1.73%, ctx=1289, majf=0, minf=9 00:20:15.196 IO depths : 1=0.1%, 2=3.0%, 4=11.7%, 8=70.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=90.8%, 8=6.6%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename1: (groupid=0, jobs=1): err= 0: pid=82778: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=226, BW=906KiB/s (928kB/s)(9144KiB/10088msec) 00:20:15.196 slat (usec): min=3, max=4022, avg=19.23, stdev=115.82 00:20:15.196 clat (msec): min=2, max=142, avg=70.40, stdev=28.40 00:20:15.196 lat (msec): min=2, max=142, avg=70.42, stdev=28.40 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 43], 20.00th=[ 48], 00:20:15.196 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:20:15.196 | 70.00th=[ 79], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:20:15.196 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:20:15.196 | 99.99th=[ 142] 00:20:15.196 bw ( KiB/s): min= 512, max= 1920, per=4.42%, avg=907.30, stdev=303.67, samples=20 00:20:15.196 iops : min= 128, max= 480, avg=226.80, stdev=75.92, samples=20 00:20:15.196 lat (msec) : 4=4.20%, 10=1.40%, 20=1.31%, 50=17.19%, 100=59.19% 00:20:15.196 lat (msec) : 250=16.71% 00:20:15.196 cpu : usr=40.15%, sys=1.30%, ctx=1180, majf=0, minf=0 00:20:15.196 IO depths : 1=0.3%, 2=1.6%, 4=5.4%, 8=77.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename1: (groupid=0, jobs=1): err= 0: pid=82779: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=213, BW=855KiB/s (876kB/s)(8592KiB/10046msec) 00:20:15.196 slat (usec): min=3, max=8049, avg=40.47, stdev=368.33 00:20:15.196 clat (msec): min=33, max=138, avg=74.48, stdev=20.21 00:20:15.196 lat (msec): min=33, max=138, avg=74.52, stdev=20.21 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 42], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:20:15.196 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:15.196 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 111], 00:20:15.196 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 136], 00:20:15.196 | 99.99th=[ 140] 00:20:15.196 bw ( KiB/s): min= 640, max= 1072, per=4.15%, avg=852.70, stdev=152.32, samples=20 00:20:15.196 iops : min= 160, max= 268, avg=213.15, stdev=38.08, samples=20 00:20:15.196 lat (msec) : 50=16.48%, 100=70.25%, 
250=13.27% 00:20:15.196 cpu : usr=38.84%, sys=1.66%, ctx=1017, majf=0, minf=9 00:20:15.196 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename1: (groupid=0, jobs=1): err= 0: pid=82780: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=219, BW=876KiB/s (897kB/s)(8812KiB/10058msec) 00:20:15.196 slat (usec): min=3, max=8031, avg=29.99, stdev=270.78 00:20:15.196 clat (msec): min=23, max=143, avg=72.79, stdev=21.39 00:20:15.196 lat (msec): min=23, max=143, avg=72.82, stdev=21.39 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:20:15.196 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:15.196 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 110], 00:20:15.196 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 133], 00:20:15.196 | 99.99th=[ 144] 00:20:15.196 bw ( KiB/s): min= 632, max= 1128, per=4.26%, avg=874.60, stdev=167.95, samples=20 00:20:15.196 iops : min= 158, max= 282, avg=218.65, stdev=41.99, samples=20 00:20:15.196 lat (msec) : 50=19.43%, 100=66.55%, 250=14.03% 00:20:15.196 cpu : usr=40.74%, sys=1.24%, ctx=1080, majf=0, minf=9 00:20:15.196 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename1: (groupid=0, jobs=1): err= 0: pid=82781: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=219, BW=879KiB/s (900kB/s)(8824KiB/10035msec) 00:20:15.196 slat (usec): min=3, max=8028, avg=25.60, stdev=209.28 00:20:15.196 clat (msec): min=22, max=151, avg=72.67, stdev=21.90 00:20:15.196 lat (msec): min=22, max=151, avg=72.69, stdev=21.90 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:20:15.196 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 73], 00:20:15.196 | 70.00th=[ 79], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 112], 00:20:15.196 | 99.00th=[ 128], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 153], 00:20:15.196 | 99.99th=[ 153] 00:20:15.196 bw ( KiB/s): min= 552, max= 1096, per=4.26%, avg=875.85, stdev=172.16, samples=20 00:20:15.196 iops : min= 138, max= 274, avg=218.95, stdev=43.03, samples=20 00:20:15.196 lat (msec) : 50=19.31%, 100=66.73%, 250=13.96% 00:20:15.196 cpu : usr=40.76%, sys=1.74%, ctx=1165, majf=0, minf=9 00:20:15.196 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename1: (groupid=0, jobs=1): err= 0: pid=82782: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=205, BW=822KiB/s (841kB/s)(8220KiB/10006msec) 00:20:15.196 slat (usec): min=5, max=8041, 
avg=38.95, stdev=351.24 00:20:15.196 clat (msec): min=7, max=235, avg=77.70, stdev=23.27 00:20:15.196 lat (msec): min=7, max=235, avg=77.74, stdev=23.28 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 22], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:20:15.196 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:20:15.196 | 70.00th=[ 90], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 110], 00:20:15.196 | 99.00th=[ 122], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 236], 00:20:15.196 | 99.99th=[ 236] 00:20:15.196 bw ( KiB/s): min= 492, max= 1056, per=3.96%, avg=813.95, stdev=167.15, samples=19 00:20:15.196 iops : min= 123, max= 264, avg=203.47, stdev=41.77, samples=19 00:20:15.196 lat (msec) : 10=0.29%, 20=0.24%, 50=12.41%, 100=69.68%, 250=17.37% 00:20:15.196 cpu : usr=39.22%, sys=1.54%, ctx=1202, majf=0, minf=9 00:20:15.196 IO depths : 1=0.1%, 2=2.9%, 4=11.6%, 8=71.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=90.3%, 8=7.2%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename1: (groupid=0, jobs=1): err= 0: pid=82783: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=202, BW=811KiB/s (831kB/s)(8124KiB/10015msec) 00:20:15.196 slat (usec): min=4, max=8024, avg=22.91, stdev=198.88 00:20:15.196 clat (msec): min=35, max=206, avg=78.71, stdev=22.52 00:20:15.196 lat (msec): min=35, max=206, avg=78.74, stdev=22.52 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:15.196 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:20:15.196 | 70.00th=[ 90], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 115], 00:20:15.196 | 99.00th=[ 146], 99.50th=[ 174], 99.90th=[ 174], 99.95th=[ 207], 00:20:15.196 | 99.99th=[ 207] 00:20:15.196 bw ( KiB/s): min= 496, max= 1024, per=3.94%, avg=810.58, stdev=179.81, samples=19 00:20:15.196 iops : min= 124, max= 256, avg=202.58, stdev=44.95, samples=19 00:20:15.196 lat (msec) : 50=15.31%, 100=65.98%, 250=18.71% 00:20:15.196 cpu : usr=34.35%, sys=1.34%, ctx=1106, majf=0, minf=9 00:20:15.196 IO depths : 1=0.1%, 2=2.3%, 4=9.4%, 8=73.4%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=89.8%, 8=8.2%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename2: (groupid=0, jobs=1): err= 0: pid=82784: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=211, BW=848KiB/s (868kB/s)(8520KiB/10048msec) 00:20:15.196 slat (usec): min=6, max=7032, avg=25.37, stdev=206.33 00:20:15.196 clat (msec): min=35, max=126, avg=75.24, stdev=19.97 00:20:15.196 lat (msec): min=35, max=127, avg=75.26, stdev=19.96 00:20:15.196 clat percentiles (msec): 00:20:15.196 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:20:15.196 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:20:15.196 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 111], 00:20:15.196 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 127], 99.95th=[ 127], 00:20:15.196 | 99.99th=[ 128] 00:20:15.196 bw ( KiB/s): min= 624, max= 1072, per=4.12%, avg=847.45, stdev=143.77, samples=20 00:20:15.196 iops : min= 156, max= 268, avg=211.80, 
stdev=35.90, samples=20 00:20:15.196 lat (msec) : 50=13.80%, 100=71.88%, 250=14.32% 00:20:15.196 cpu : usr=37.32%, sys=1.20%, ctx=1587, majf=0, minf=9 00:20:15.196 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:15.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.196 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.196 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.196 filename2: (groupid=0, jobs=1): err= 0: pid=82785: Wed Oct 16 09:34:37 2024 00:20:15.196 read: IOPS=217, BW=869KiB/s (890kB/s)(8700KiB/10007msec) 00:20:15.196 slat (usec): min=4, max=8047, avg=41.15, stdev=362.09 00:20:15.197 clat (msec): min=6, max=217, avg=73.43, stdev=24.74 00:20:15.197 lat (msec): min=6, max=217, avg=73.47, stdev=24.74 00:20:15.197 clat percentiles (msec): 00:20:15.197 | 1.00th=[ 9], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:20:15.197 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:20:15.197 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 110], 00:20:15.197 | 99.00th=[ 121], 99.50th=[ 184], 99.90th=[ 184], 99.95th=[ 218], 00:20:15.197 | 99.99th=[ 218] 00:20:15.197 bw ( KiB/s): min= 496, max= 1120, per=4.15%, avg=853.47, stdev=195.16, samples=19 00:20:15.197 iops : min= 124, max= 280, avg=213.37, stdev=48.79, samples=19 00:20:15.197 lat (msec) : 10=1.47%, 20=0.46%, 50=18.30%, 100=63.72%, 250=16.05% 00:20:15.197 cpu : usr=42.68%, sys=1.58%, ctx=1224, majf=0, minf=9 00:20:15.197 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.8%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:15.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 complete : 0=0.0%, 4=88.9%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.197 filename2: (groupid=0, jobs=1): err= 0: pid=82786: Wed Oct 16 09:34:37 2024 00:20:15.197 read: IOPS=218, BW=875KiB/s (896kB/s)(8784KiB/10042msec) 00:20:15.197 slat (usec): min=3, max=8043, avg=27.68, stdev=296.38 00:20:15.197 clat (msec): min=23, max=151, avg=73.04, stdev=22.10 00:20:15.197 lat (msec): min=24, max=151, avg=73.07, stdev=22.10 00:20:15.197 clat percentiles (msec): 00:20:15.197 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:20:15.197 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:20:15.197 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 112], 00:20:15.197 | 99.00th=[ 132], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 153], 00:20:15.197 | 99.99th=[ 153] 00:20:15.197 bw ( KiB/s): min= 504, max= 1096, per=4.24%, avg=871.90, stdev=171.33, samples=20 00:20:15.197 iops : min= 126, max= 274, avg=217.95, stdev=42.83, samples=20 00:20:15.197 lat (msec) : 50=21.58%, 100=65.16%, 250=13.25% 00:20:15.197 cpu : usr=32.01%, sys=1.25%, ctx=879, majf=0, minf=9 00:20:15.197 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:15.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.197 filename2: (groupid=0, jobs=1): err= 0: pid=82787: Wed Oct 16 09:34:37 2024 00:20:15.197 read: IOPS=217, 
BW=872KiB/s (893kB/s)(8788KiB/10081msec) 00:20:15.197 slat (usec): min=3, max=8034, avg=30.15, stdev=341.87 00:20:15.197 clat (msec): min=2, max=155, avg=73.09, stdev=22.86 00:20:15.197 lat (msec): min=2, max=155, avg=73.12, stdev=22.85 00:20:15.197 clat percentiles (msec): 00:20:15.197 | 1.00th=[ 12], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 52], 00:20:15.197 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:20:15.197 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 111], 00:20:15.197 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 142], 00:20:15.197 | 99.99th=[ 157] 00:20:15.197 bw ( KiB/s): min= 624, max= 1152, per=4.25%, avg=872.15, stdev=156.04, samples=20 00:20:15.197 iops : min= 156, max= 288, avg=218.00, stdev=39.00, samples=20 00:20:15.197 lat (msec) : 4=0.64%, 10=0.18%, 20=1.27%, 50=16.57%, 100=68.78% 00:20:15.197 lat (msec) : 250=12.56% 00:20:15.197 cpu : usr=32.19%, sys=1.19%, ctx=900, majf=0, minf=9 00:20:15.197 IO depths : 1=0.1%, 2=0.7%, 4=2.5%, 8=80.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:15.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 issued rwts: total=2197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.197 filename2: (groupid=0, jobs=1): err= 0: pid=82788: Wed Oct 16 09:34:37 2024 00:20:15.197 read: IOPS=214, BW=856KiB/s (877kB/s)(8576KiB/10016msec) 00:20:15.197 slat (usec): min=4, max=8090, avg=35.99, stdev=356.72 00:20:15.197 clat (msec): min=34, max=170, avg=74.53, stdev=22.36 00:20:15.197 lat (msec): min=34, max=170, avg=74.57, stdev=22.36 00:20:15.197 clat percentiles (msec): 00:20:15.197 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 50], 00:20:15.197 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:20:15.197 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 115], 00:20:15.197 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 171], 00:20:15.197 | 99.99th=[ 171] 00:20:15.197 bw ( KiB/s): min= 513, max= 1072, per=4.18%, avg=859.37, stdev=193.64, samples=19 00:20:15.197 iops : min= 128, max= 268, avg=214.79, stdev=48.41, samples=19 00:20:15.197 lat (msec) : 50=20.71%, 100=62.83%, 250=16.46% 00:20:15.197 cpu : usr=34.66%, sys=1.30%, ctx=941, majf=0, minf=9 00:20:15.197 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:15.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.197 filename2: (groupid=0, jobs=1): err= 0: pid=82789: Wed Oct 16 09:34:37 2024 00:20:15.197 read: IOPS=213, BW=854KiB/s (874kB/s)(8540KiB/10003msec) 00:20:15.197 slat (usec): min=3, max=8039, avg=34.87, stdev=331.45 00:20:15.197 clat (msec): min=4, max=226, avg=74.80, stdev=25.14 00:20:15.197 lat (msec): min=4, max=226, avg=74.84, stdev=25.14 00:20:15.197 clat percentiles (msec): 00:20:15.197 | 1.00th=[ 9], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 52], 00:20:15.197 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:20:15.197 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:20:15.197 | 99.00th=[ 123], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 228], 00:20:15.197 | 99.99th=[ 228] 00:20:15.197 bw ( KiB/s): min= 
496, max= 1080, per=4.07%, avg=836.95, stdev=187.82, samples=19 00:20:15.197 iops : min= 124, max= 270, avg=209.21, stdev=46.94, samples=19 00:20:15.197 lat (msec) : 10=1.50%, 20=0.42%, 50=17.05%, 100=66.09%, 250=14.94% 00:20:15.197 cpu : usr=34.62%, sys=1.53%, ctx=1003, majf=0, minf=9 00:20:15.197 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=74.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:15.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.197 filename2: (groupid=0, jobs=1): err= 0: pid=82790: Wed Oct 16 09:34:37 2024 00:20:15.197 read: IOPS=212, BW=850KiB/s (870kB/s)(8516KiB/10019msec) 00:20:15.197 slat (usec): min=3, max=8038, avg=33.06, stdev=347.34 00:20:15.197 clat (msec): min=35, max=175, avg=75.09, stdev=22.26 00:20:15.197 lat (msec): min=35, max=175, avg=75.13, stdev=22.27 00:20:15.197 clat percentiles (msec): 00:20:15.197 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 48], 00:20:15.197 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:20:15.197 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 116], 00:20:15.197 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 142], 99.95th=[ 176], 00:20:15.197 | 99.99th=[ 176] 00:20:15.197 bw ( KiB/s): min= 513, max= 1128, per=4.12%, avg=847.55, stdev=185.42, samples=20 00:20:15.197 iops : min= 128, max= 282, avg=211.85, stdev=46.36, samples=20 00:20:15.197 lat (msec) : 50=21.42%, 100=62.89%, 250=15.69% 00:20:15.197 cpu : usr=32.17%, sys=1.43%, ctx=935, majf=0, minf=9 00:20:15.197 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:15.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 complete : 0=0.0%, 4=88.8%, 8=9.9%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 issued rwts: total=2129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.197 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:15.197 filename2: (groupid=0, jobs=1): err= 0: pid=82791: Wed Oct 16 09:34:37 2024 00:20:15.197 read: IOPS=215, BW=863KiB/s (883kB/s)(8676KiB/10058msec) 00:20:15.197 slat (usec): min=6, max=8033, avg=37.72, stdev=420.92 00:20:15.197 clat (msec): min=11, max=140, avg=73.92, stdev=21.82 00:20:15.197 lat (msec): min=11, max=140, avg=73.96, stdev=21.82 00:20:15.197 clat percentiles (msec): 00:20:15.197 | 1.00th=[ 14], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:20:15.197 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:20:15.197 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 112], 00:20:15.197 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 138], 00:20:15.197 | 99.99th=[ 140] 00:20:15.197 bw ( KiB/s): min= 656, max= 1040, per=4.19%, avg=860.85, stdev=138.43, samples=20 00:20:15.197 iops : min= 164, max= 260, avg=215.20, stdev=34.61, samples=20 00:20:15.197 lat (msec) : 20=1.48%, 50=15.54%, 100=69.25%, 250=13.74% 00:20:15.197 cpu : usr=32.39%, sys=1.43%, ctx=939, majf=0, minf=9 00:20:15.197 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.6%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:15.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.197 issued rwts: total=2169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.197 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:20:15.197 00:20:15.197 Run status group 0 (all jobs): 00:20:15.197 READ: bw=20.1MiB/s (21.0MB/s), 721KiB/s-923KiB/s (738kB/s-945kB/s), io=202MiB (212MB), run=10001-10088msec 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.197 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 bdev_null0 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 [2024-10-16 09:34:37.788327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 bdev_null1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:15.198 { 00:20:15.198 "params": { 00:20:15.198 "name": "Nvme$subsystem", 00:20:15.198 "trtype": "$TEST_TRANSPORT", 00:20:15.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.198 "adrfam": "ipv4", 00:20:15.198 "trsvcid": "$NVMF_PORT", 00:20:15.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.198 "hdgst": ${hdgst:-false}, 00:20:15.198 "ddgst": ${ddgst:-false} 00:20:15.198 }, 00:20:15.198 "method": "bdev_nvme_attach_controller" 00:20:15.198 } 00:20:15.198 EOF 00:20:15.198 )") 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:15.198 { 00:20:15.198 "params": { 00:20:15.198 "name": "Nvme$subsystem", 00:20:15.198 "trtype": "$TEST_TRANSPORT", 00:20:15.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.198 "adrfam": "ipv4", 00:20:15.198 "trsvcid": "$NVMF_PORT", 00:20:15.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.198 "hdgst": ${hdgst:-false}, 00:20:15.198 "ddgst": ${ddgst:-false} 00:20:15.198 }, 00:20:15.198 "method": "bdev_nvme_attach_controller" 00:20:15.198 } 00:20:15.198 EOF 00:20:15.198 )") 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:15.198 "params": { 00:20:15.198 "name": "Nvme0", 00:20:15.198 "trtype": "tcp", 00:20:15.198 "traddr": "10.0.0.3", 00:20:15.198 "adrfam": "ipv4", 00:20:15.198 "trsvcid": "4420", 00:20:15.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:15.198 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:15.198 "hdgst": false, 00:20:15.198 "ddgst": false 00:20:15.198 }, 00:20:15.198 "method": "bdev_nvme_attach_controller" 00:20:15.198 },{ 00:20:15.198 "params": { 00:20:15.198 "name": "Nvme1", 00:20:15.198 "trtype": "tcp", 00:20:15.198 "traddr": "10.0.0.3", 00:20:15.198 "adrfam": "ipv4", 00:20:15.198 "trsvcid": "4420", 00:20:15.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.198 "hdgst": false, 00:20:15.198 "ddgst": false 00:20:15.198 }, 00:20:15.198 "method": "bdev_nvme_attach_controller" 00:20:15.198 }' 00:20:15.198 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:15.199 09:34:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.199 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:15.199 ... 00:20:15.199 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:15.199 ... 
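At this point target/dif.sh hands fio two pipes: /dev/fd/62 carries the bdev_nvme_attach_controller JSON printed just above, and /dev/fd/61 carries the job file built by gen_fio_conf. A hand-run equivalent of the invocation, pieced together from the options echoed earlier in the trace (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5) rather than copied from the script, could look roughly like the sketch below; the nvme_attach.json name and the Nvme0n1/Nvme1n1 filenames (the namespace bdevs the attach calls would normally produce) are assumptions for illustration only.

    # Sketch only: standalone equivalent of the traced fio run. nvme_attach.json
    # stands in for the JSON printed above; the bdev filenames assume the usual
    # <controller>n1 namespace naming of bdev_nvme_attach_controller.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --thread=1 \
        --spdk_json_conf=nvme_attach.json \
        --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 \
        --time_based=1 --runtime=5 \
        --name=filename0 --filename=Nvme0n1 \
        --name=filename1 --filename=Nvme1n1

With numjobs=2 applied to both named jobs this yields the four worker threads that the run below reports starting.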
00:20:15.199 fio-3.35 00:20:15.199 Starting 4 threads 00:20:19.390 00:20:19.390 filename0: (groupid=0, jobs=1): err= 0: pid=82931: Wed Oct 16 09:34:43 2024 00:20:19.390 read: IOPS=2393, BW=18.7MiB/s (19.6MB/s)(93.6MiB/5004msec) 00:20:19.390 slat (usec): min=6, max=132, avg=13.88, stdev= 8.55 00:20:19.390 clat (usec): min=438, max=6851, avg=3306.98, stdev=995.44 00:20:19.390 lat (usec): min=449, max=6865, avg=3320.86, stdev=994.98 00:20:19.390 clat percentiles (usec): 00:20:19.390 | 1.00th=[ 1287], 5.00th=[ 1827], 10.00th=[ 1958], 20.00th=[ 2376], 00:20:19.390 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 3458], 60.00th=[ 3916], 00:20:19.390 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:20:19.390 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 6325], 99.95th=[ 6718], 00:20:19.390 | 99.99th=[ 6718] 00:20:19.390 bw ( KiB/s): min=17776, max=20576, per=26.79%, avg=19096.89, stdev=753.54, samples=9 00:20:19.390 iops : min= 2222, max= 2572, avg=2387.11, stdev=94.19, samples=9 00:20:19.390 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.05% 00:20:19.390 lat (msec) : 2=11.16%, 4=51.99%, 10=36.77% 00:20:19.390 cpu : usr=93.24%, sys=5.70%, ctx=30, majf=0, minf=0 00:20:19.390 IO depths : 1=0.2%, 2=2.4%, 4=62.4%, 8=35.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.390 complete : 0=0.0%, 4=99.1%, 8=0.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.390 issued rwts: total=11976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.390 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:19.390 filename0: (groupid=0, jobs=1): err= 0: pid=82932: Wed Oct 16 09:34:43 2024 00:20:19.390 read: IOPS=1801, BW=14.1MiB/s (14.8MB/s)(70.4MiB/5001msec) 00:20:19.390 slat (nsec): min=3770, max=79620, avg=14484.88, stdev=8978.43 00:20:19.390 clat (usec): min=1282, max=6793, avg=4378.97, stdev=413.27 00:20:19.390 lat (usec): min=1306, max=6813, avg=4393.46, stdev=411.42 00:20:19.390 clat percentiles (usec): 00:20:19.390 | 1.00th=[ 3130], 5.00th=[ 3752], 10.00th=[ 3884], 20.00th=[ 4146], 00:20:19.390 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:20:19.390 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 4948], 00:20:19.390 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5735], 99.95th=[ 5866], 00:20:19.390 | 99.99th=[ 6783] 00:20:19.390 bw ( KiB/s): min=13696, max=16095, per=20.21%, avg=14405.22, stdev=858.33, samples=9 00:20:19.390 iops : min= 1712, max= 2011, avg=1800.56, stdev=107.08, samples=9 00:20:19.390 lat (msec) : 2=0.31%, 4=15.52%, 10=84.17% 00:20:19.390 cpu : usr=93.00%, sys=6.06%, ctx=12, majf=0, minf=0 00:20:19.390 IO depths : 1=0.6%, 2=24.6%, 4=50.2%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.390 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.390 issued rwts: total=9009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.390 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:19.390 filename1: (groupid=0, jobs=1): err= 0: pid=82933: Wed Oct 16 09:34:43 2024 00:20:19.390 read: IOPS=2350, BW=18.4MiB/s (19.3MB/s)(91.9MiB/5003msec) 00:20:19.390 slat (usec): min=6, max=131, avg=17.21, stdev= 7.88 00:20:19.390 clat (usec): min=489, max=6490, avg=3359.34, stdev=973.33 00:20:19.390 lat (usec): min=511, max=6500, avg=3376.55, stdev=973.50 00:20:19.390 clat percentiles (usec): 00:20:19.390 | 1.00th=[ 1287], 5.00th=[ 1876], 10.00th=[ 2024], 20.00th=[ 
2409], 00:20:19.390 | 30.00th=[ 2606], 40.00th=[ 2802], 50.00th=[ 3654], 60.00th=[ 3949], 00:20:19.390 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:20:19.390 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5997], 99.95th=[ 6063], 00:20:19.390 | 99.99th=[ 6456] 00:20:19.390 bw ( KiB/s): min=16817, max=19680, per=26.25%, avg=18713.00, stdev=949.63, samples=9 00:20:19.390 iops : min= 2102, max= 2460, avg=2339.11, stdev=118.74, samples=9 00:20:19.390 lat (usec) : 500=0.01%, 1000=0.01% 00:20:19.390 lat (msec) : 2=9.31%, 4=52.02%, 10=38.65% 00:20:19.390 cpu : usr=94.36%, sys=4.60%, ctx=5, majf=0, minf=0 00:20:19.390 IO depths : 1=0.3%, 2=3.5%, 4=61.8%, 8=34.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.390 complete : 0=0.0%, 4=98.7%, 8=1.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.390 issued rwts: total=11758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.390 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:19.390 filename1: (groupid=0, jobs=1): err= 0: pid=82934: Wed Oct 16 09:34:43 2024 00:20:19.390 read: IOPS=2368, BW=18.5MiB/s (19.4MB/s)(92.5MiB/5001msec) 00:20:19.390 slat (usec): min=3, max=132, avg=17.00, stdev= 8.15 00:20:19.390 clat (usec): min=635, max=6044, avg=3332.84, stdev=969.32 00:20:19.390 lat (usec): min=647, max=6051, avg=3349.85, stdev=969.11 00:20:19.391 clat percentiles (usec): 00:20:19.391 | 1.00th=[ 1270], 5.00th=[ 1893], 10.00th=[ 2008], 20.00th=[ 2376], 00:20:19.391 | 30.00th=[ 2573], 40.00th=[ 2769], 50.00th=[ 3556], 60.00th=[ 3916], 00:20:19.391 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:20:19.391 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5473], 00:20:19.391 | 99.99th=[ 5800] 00:20:19.391 bw ( KiB/s): min=16095, max=20560, per=26.45%, avg=18851.44, stdev=1282.66, samples=9 00:20:19.391 iops : min= 2011, max= 2570, avg=2356.33, stdev=160.57, samples=9 00:20:19.391 lat (usec) : 750=0.02%, 1000=0.12% 00:20:19.391 lat (msec) : 2=9.46%, 4=53.54%, 10=36.87% 00:20:19.391 cpu : usr=94.08%, sys=4.92%, ctx=8, majf=0, minf=9 00:20:19.391 IO depths : 1=0.2%, 2=3.4%, 4=61.9%, 8=34.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.391 complete : 0=0.0%, 4=98.7%, 8=1.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.391 issued rwts: total=11845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.391 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:19.391 00:20:19.391 Run status group 0 (all jobs): 00:20:19.391 READ: bw=69.6MiB/s (73.0MB/s), 14.1MiB/s-18.7MiB/s (14.8MB/s-19.6MB/s), io=348MiB (365MB), run=5001-5004msec 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 09:34:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.650 09:34:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 09:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.650 00:20:19.650 real 0m23.661s 00:20:19.650 user 2m5.801s 00:20:19.650 sys 0m6.856s 00:20:19.650 09:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:19.650 09:34:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:19.650 ************************************ 00:20:19.650 END TEST fio_dif_rand_params 00:20:19.650 ************************************ 00:20:19.650 09:34:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:19.650 09:34:44 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:19.650 09:34:44 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.650 09:34:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:19.910 ************************************ 00:20:19.910 START TEST fio_dif_digest 00:20:19.910 ************************************ 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:19.910 09:34:44 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:19.910 bdev_null0 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:19.910 [2024-10-16 09:34:44.095974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:19.910 { 00:20:19.910 "params": { 00:20:19.910 "name": "Nvme$subsystem", 00:20:19.910 "trtype": "$TEST_TRANSPORT", 00:20:19.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.910 "adrfam": "ipv4", 00:20:19.910 "trsvcid": "$NVMF_PORT", 00:20:19.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.910 "hdgst": ${hdgst:-false}, 00:20:19.910 "ddgst": 
${ddgst:-false} 00:20:19.910 }, 00:20:19.910 "method": "bdev_nvme_attach_controller" 00:20:19.910 } 00:20:19.910 EOF 00:20:19.910 )") 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:19.910 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
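The trace above shows how the test wraps stock fio around the SPDK bdev ioengine: fio_bdev runs ldd against the spdk_bdev plugin and greps for libasan and libclang_rt.asan so any sanitizer runtime found can be preloaded ahead of the plugin, then hands the plugin itself to fio via LD_PRELOAD. A minimal standalone sketch of that sequence, using only paths and commands visible in the trace (no sanitizer library is found on this host, so only the plugin ends up preloaded):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty on this host
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61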
00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:19.911 "params": { 00:20:19.911 "name": "Nvme0", 00:20:19.911 "trtype": "tcp", 00:20:19.911 "traddr": "10.0.0.3", 00:20:19.911 "adrfam": "ipv4", 00:20:19.911 "trsvcid": "4420", 00:20:19.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.911 "hdgst": true, 00:20:19.911 "ddgst": true 00:20:19.911 }, 00:20:19.911 "method": "bdev_nvme_attach_controller" 00:20:19.911 }' 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:19.911 09:34:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.170 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:20.170 ... 
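fio echoes the effective job parameters above (randread, 128 KiB blocks, iodepth 3, spdk_bdev ioengine), matching the bs/numjobs/iodepth/runtime values set at the start of this digest test. An illustrative job file expressing the same parameters is sketched below; the test actually generates its config on the fly through gen_fio_conf and pipes it in over /dev/fd/61, and the bdev name is an assumption based on the Nvme0 controller attached in the JSON above:

    ; illustrative only -- the test pipes an equivalent config via /dev/fd/61
    ; filename (bdev name) is assumed; the log only shows controller "Nvme0"
    [filename0]
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=10
    filename=Nvme0n1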
00:20:20.170 fio-3.35 00:20:20.170 Starting 3 threads 00:20:32.401 00:20:32.401 filename0: (groupid=0, jobs=1): err= 0: pid=83040: Wed Oct 16 09:34:54 2024 00:20:32.401 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(323MiB/10008msec) 00:20:32.401 slat (nsec): min=6159, max=63656, avg=10637.97, stdev=5355.98 00:20:32.401 clat (usec): min=8503, max=13850, avg=11583.49, stdev=464.92 00:20:32.401 lat (usec): min=8513, max=13864, avg=11594.13, stdev=465.05 00:20:32.401 clat percentiles (usec): 00:20:32.401 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11338], 00:20:32.401 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:20:32.401 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:20:32.401 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[13829], 00:20:32.401 | 99.99th=[13829] 00:20:32.401 bw ( KiB/s): min=32256, max=33792, per=33.36%, avg=33064.42, stdev=699.85, samples=19 00:20:32.401 iops : min= 252, max= 264, avg=258.32, stdev= 5.47, samples=19 00:20:32.401 lat (msec) : 10=0.35%, 20=99.65% 00:20:32.401 cpu : usr=94.76%, sys=4.70%, ctx=15, majf=0, minf=9 00:20:32.401 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:32.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.401 issued rwts: total=2586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:32.401 filename0: (groupid=0, jobs=1): err= 0: pid=83041: Wed Oct 16 09:34:54 2024 00:20:32.401 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(323MiB/10001msec) 00:20:32.401 slat (nsec): min=3927, max=69826, avg=13097.04, stdev=8532.56 00:20:32.401 clat (usec): min=9637, max=13502, avg=11580.69, stdev=440.27 00:20:32.401 lat (usec): min=9641, max=13521, avg=11593.79, stdev=440.75 00:20:32.401 clat percentiles (usec): 00:20:32.401 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11338], 00:20:32.401 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:20:32.401 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:20:32.401 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13435], 99.95th=[13435], 00:20:32.401 | 99.99th=[13566] 00:20:32.401 bw ( KiB/s): min=32256, max=33792, per=33.36%, avg=33064.42, stdev=477.13, samples=19 00:20:32.401 iops : min= 252, max= 264, avg=258.32, stdev= 3.73, samples=19 00:20:32.401 lat (msec) : 10=0.12%, 20=99.88% 00:20:32.401 cpu : usr=93.64%, sys=5.68%, ctx=21, majf=0, minf=0 00:20:32.401 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:32.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.401 issued rwts: total=2583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:32.401 filename0: (groupid=0, jobs=1): err= 0: pid=83042: Wed Oct 16 09:34:54 2024 00:20:32.401 read: IOPS=258, BW=32.3MiB/s (33.8MB/s)(323MiB/10002msec) 00:20:32.401 slat (nsec): min=5295, max=71204, avg=13083.08, stdev=7786.82 00:20:32.401 clat (usec): min=1442, max=17610, avg=11590.90, stdev=558.82 00:20:32.401 lat (usec): min=1453, max=17631, avg=11603.98, stdev=559.13 00:20:32.401 clat percentiles (usec): 00:20:32.401 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11338], 00:20:32.401 | 30.00th=[11338], 
40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 00:20:32.401 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:20:32.401 | 99.00th=[13173], 99.50th=[13304], 99.90th=[17695], 99.95th=[17695], 00:20:32.401 | 99.99th=[17695] 00:20:32.401 bw ( KiB/s): min=32256, max=33792, per=33.32%, avg=33024.00, stdev=362.04, samples=19 00:20:32.401 iops : min= 252, max= 264, avg=258.00, stdev= 2.83, samples=19 00:20:32.401 lat (msec) : 2=0.04%, 10=0.12%, 20=99.85% 00:20:32.401 cpu : usr=94.65%, sys=4.72%, ctx=27, majf=0, minf=0 00:20:32.401 IO depths : 1=33.4%, 2=66.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:32.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.401 issued rwts: total=2581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:32.401 00:20:32.402 Run status group 0 (all jobs): 00:20:32.402 READ: bw=96.8MiB/s (101MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.9MB/s), io=969MiB (1016MB), run=10001-10008msec 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.402 00:20:32.402 real 0m10.997s 00:20:32.402 user 0m28.949s 00:20:32.402 sys 0m1.800s 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:32.402 09:34:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:32.402 ************************************ 00:20:32.402 END TEST fio_dif_digest 00:20:32.402 ************************************ 00:20:32.402 09:34:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:32.402 09:34:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.402 rmmod nvme_tcp 00:20:32.402 rmmod nvme_fabrics 00:20:32.402 rmmod nvme_keyring 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 82294 ']' 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 82294 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 82294 ']' 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 82294 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82294 00:20:32.402 killing process with pid 82294 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82294' 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@969 -- # kill 82294 00:20:32.402 09:34:55 nvmf_dif -- common/autotest_common.sh@974 -- # wait 82294 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:32.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:32.402 Waiting for block devices as requested 00:20:32.402 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:32.402 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:32.402 09:34:55 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.402 09:34:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:32.402 09:34:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.402 09:34:56 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:32.402 ************************************ 00:20:32.402 END TEST nvmf_dif 00:20:32.402 ************************************ 00:20:32.402 00:20:32.402 real 0m59.585s 00:20:32.402 user 3m49.686s 00:20:32.402 sys 0m17.797s 00:20:32.402 09:34:56 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:32.402 09:34:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:32.402 09:34:56 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:32.402 09:34:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:32.402 09:34:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:32.402 09:34:56 -- common/autotest_common.sh@10 -- # set +x 00:20:32.402 ************************************ 00:20:32.402 START TEST nvmf_abort_qd_sizes 00:20:32.402 ************************************ 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:32.402 * Looking for test storage... 00:20:32.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.402 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:32.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.403 --rc genhtml_branch_coverage=1 00:20:32.403 --rc genhtml_function_coverage=1 00:20:32.403 --rc genhtml_legend=1 00:20:32.403 --rc geninfo_all_blocks=1 00:20:32.403 --rc geninfo_unexecuted_blocks=1 00:20:32.403 00:20:32.403 ' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:32.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.403 --rc genhtml_branch_coverage=1 00:20:32.403 --rc genhtml_function_coverage=1 00:20:32.403 --rc genhtml_legend=1 00:20:32.403 --rc geninfo_all_blocks=1 00:20:32.403 --rc geninfo_unexecuted_blocks=1 00:20:32.403 00:20:32.403 ' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:32.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.403 --rc genhtml_branch_coverage=1 00:20:32.403 --rc genhtml_function_coverage=1 00:20:32.403 --rc genhtml_legend=1 00:20:32.403 --rc geninfo_all_blocks=1 00:20:32.403 --rc geninfo_unexecuted_blocks=1 00:20:32.403 00:20:32.403 ' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:32.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.403 --rc genhtml_branch_coverage=1 00:20:32.403 --rc genhtml_function_coverage=1 00:20:32.403 --rc genhtml_legend=1 00:20:32.403 --rc geninfo_all_blocks=1 00:20:32.403 --rc geninfo_unexecuted_blocks=1 00:20:32.403 00:20:32.403 ' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:32.403 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:32.403 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:32.404 Cannot find device "nvmf_init_br" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:32.404 Cannot find device "nvmf_init_br2" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:32.404 Cannot find device "nvmf_tgt_br" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.404 Cannot find device "nvmf_tgt_br2" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:32.404 Cannot find device "nvmf_init_br" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:32.404 Cannot find device "nvmf_init_br2" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:32.404 Cannot find device "nvmf_tgt_br" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:32.404 Cannot find device "nvmf_tgt_br2" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:32.404 Cannot find device "nvmf_br" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:32.404 Cannot find device "nvmf_init_if" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:32.404 Cannot find device "nvmf_init_if2" 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:32.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
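The "Cannot find device" and "Cannot open network namespace" messages above are expected on a clean host: nvmf_veth_init first tries to tear down any leftover topology, then builds it from scratch as the following trace shows. Condensed to a single initiator/target pair, the setup amounts to the sketch below (commands taken from the trace; the full run also creates the nvmf_init_if2/nvmf_tgt_if2 pair at 10.0.0.2/10.0.0.4 and brings every link up):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT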
00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:32.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:32.404 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:32.663 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:32.664 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:32.664 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:20:32.664 00:20:32.664 --- 10.0.0.3 ping statistics --- 00:20:32.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.664 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:32.664 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:32.664 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:32.664 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:20:32.664 00:20:32.664 --- 10.0.0.4 ping statistics --- 00:20:32.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.664 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:32.664 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:32.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:32.664 00:20:32.664 --- 10.0.0.1 ping statistics --- 00:20:32.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.664 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:32.664 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:32.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:32.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:20:32.664 00:20:32.664 --- 10.0.0.2 ping statistics --- 00:20:32.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.664 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:32.664 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.664 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # return 0 00:20:32.664 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:20:32.664 09:34:56 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:33.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:33.290 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:33.548 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=83698 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 83698 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 83698 ']' 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.548 09:34:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:33.548 [2024-10-16 09:34:57.863126] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:20:33.548 [2024-10-16 09:34:57.863399] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.806 [2024-10-16 09:34:58.006076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.806 [2024-10-16 09:34:58.065234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.806 [2024-10-16 09:34:58.065570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.806 [2024-10-16 09:34:58.065790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.806 [2024-10-16 09:34:58.065945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.806 [2024-10-16 09:34:58.065992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.806 [2024-10-16 09:34:58.067320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.806 [2024-10-16 09:34:58.067460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.806 [2024-10-16 09:34:58.068242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.806 [2024-10-16 09:34:58.068243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.806 [2024-10-16 09:34:58.126336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:33.806 09:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.806 09:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:20:33.806 09:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:33.806 09:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.806 09:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:20:34.065 09:34:58 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
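The enumeration above is how nvme_in_userspace discovers controllers: it filters lspci output for PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), then checks each BDF against /sys/bus/pci/drivers/nvme, which yields the two emulated controllers 0000:00:10.0 and 0000:00:11.0. The filter pipeline, reproduced standalone from the trace above:

    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'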
00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:34.065 09:34:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:34.065 ************************************ 00:20:34.065 START TEST spdk_target_abort 00:20:34.065 ************************************ 00:20:34.065 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:20:34.065 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:34.065 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:34.065 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.065 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:34.065 spdk_targetn1 00:20:34.065 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 [2024-10-16 09:34:58.370518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:34.066 [2024-10-16 09:34:58.412499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:34.066 09:34:58 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:34.066 09:34:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:37.351 Initializing NVMe Controllers 00:20:37.351 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:37.351 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:37.351 Initialization complete. Launching workers. 
00:20:37.351 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9478, failed: 0 00:20:37.351 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1094, failed to submit 8384 00:20:37.351 success 934, unsuccessful 160, failed 0 00:20:37.351 09:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:37.351 09:35:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:40.636 Initializing NVMe Controllers 00:20:40.636 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:40.636 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:40.636 Initialization complete. Launching workers. 00:20:40.636 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9012, failed: 0 00:20:40.636 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1192, failed to submit 7820 00:20:40.636 success 396, unsuccessful 796, failed 0 00:20:40.636 09:35:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:40.636 09:35:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:43.927 Initializing NVMe Controllers 00:20:43.927 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:43.927 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:43.927 Initialization complete. Launching workers. 
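The abort counters reported after each run are internally consistent and can be sanity-checked by hand: for the qd=24 run above, aborts submitted (1192) plus aborts that failed to submit (7820) equal the 9012 I/Os completed, and successful (396) plus unsuccessful (796) aborts add back up to the 1192 submitted; the qd=4 run obeys the same identity (1094 + 8384 = 9478 and 934 + 160 = 1094).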
00:20:43.927 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30673, failed: 0 00:20:43.927 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2284, failed to submit 28389 00:20:43.927 success 457, unsuccessful 1827, failed 0 00:20:43.927 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:43.927 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.927 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:43.927 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.927 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:43.927 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.927 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83698 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 83698 ']' 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 83698 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83698 00:20:44.495 killing process with pid 83698 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83698' 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 83698 00:20:44.495 09:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 83698 00:20:44.754 ************************************ 00:20:44.754 END TEST spdk_target_abort 00:20:44.754 ************************************ 00:20:44.754 00:20:44.754 real 0m10.727s 00:20:44.754 user 0m41.607s 00:20:44.754 sys 0m1.778s 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:44.754 09:35:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:44.754 09:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:44.754 09:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.754 09:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:44.754 ************************************ 00:20:44.754 START TEST kernel_target_abort 00:20:44.754 
************************************ 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:44.754 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:45.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.321 Waiting for block devices as requested 00:20:45.321 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.321 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:45.321 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:45.321 No valid GPT data, bailing 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:45.581 No valid GPT data, bailing 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
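The device screening traced here (is_block_zoned and block_in_use over every /sys/block/nvme*) amounts to the loop below; this is a condensed sketch rather than the literal helpers, with the "in use" test reduced to the GPT and PTTYPE probes the trace shows:

    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $(cat "$block/queue/zoned") == none ]] || continue          # skip zoned namespaces
        scripts/spdk-gpt.py "$dev"                                     # "No valid GPT data, bailing" is the good case here
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue   # skip anything carrying a partition table
        nvme=/dev/$dev                                                 # last clean device wins; /dev/nvme1n1 in this run
    done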
00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:45.581 No valid GPT data, bailing 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:45.581 No valid GPT data, bailing 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f --hostid=5989d9e2-d339-420e-a2f4-bd87604f111f -a 10.0.0.1 -t tcp -s 4420 00:20:45.581 00:20:45.581 Discovery Log Number of Records 2, Generation counter 2 00:20:45.581 =====Discovery Log Entry 0====== 00:20:45.581 trtype: tcp 00:20:45.581 adrfam: ipv4 00:20:45.581 subtype: current discovery subsystem 00:20:45.581 treq: not specified, sq flow control disable supported 00:20:45.581 portid: 1 00:20:45.581 trsvcid: 4420 00:20:45.581 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:45.581 traddr: 10.0.0.1 00:20:45.581 eflags: none 00:20:45.581 sectype: none 00:20:45.581 =====Discovery Log Entry 1====== 00:20:45.581 trtype: tcp 00:20:45.581 adrfam: ipv4 00:20:45.581 subtype: nvme subsystem 00:20:45.581 treq: not specified, sq flow control disable supported 00:20:45.581 portid: 1 00:20:45.581 trsvcid: 4420 00:20:45.581 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:45.581 traddr: 10.0.0.1 00:20:45.581 eflags: none 00:20:45.581 sectype: none 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:45.581 09:35:09 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.581 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.582 09:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.870 Initializing NVMe Controllers 00:20:48.870 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.870 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:48.870 Initialization complete. Launching workers. 00:20:48.870 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33366, failed: 0 00:20:48.870 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33366, failed to submit 0 00:20:48.870 success 0, unsuccessful 33366, failed 0 00:20:48.870 09:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:48.870 09:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:52.175 Initializing NVMe Controllers 00:20:52.175 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:52.175 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:52.175 Initialization complete. Launching workers. 
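The kernel target these runs abort against was assembled with plain Linux nvmet configfs operations, traced just before the first run. The xtrace output does not show where each echo is redirected, so the attribute names below are the standard nvmet configfs ones and should be read as an assumption rather than a quote from the script:

    mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    mkdir /sys/kernel/config/nvmet/ports/1
    echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host   # target assumed
    echo /dev/nvme1n1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/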
00:20:52.175 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64404, failed: 0 00:20:52.175 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26075, failed to submit 38329 00:20:52.175 success 0, unsuccessful 26075, failed 0 00:20:52.175 09:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:52.175 09:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:55.460 Initializing NVMe Controllers 00:20:55.460 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:55.460 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:55.460 Initialization complete. Launching workers. 00:20:55.460 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69604, failed: 0 00:20:55.460 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17382, failed to submit 52222 00:20:55.460 success 0, unsuccessful 17382, failed 0 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:20:55.460 09:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:56.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:57.401 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:57.401 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:57.401 ************************************ 00:20:57.401 END TEST kernel_target_abort 00:20:57.401 ************************************ 00:20:57.401 00:20:57.401 real 0m12.556s 00:20:57.401 user 0m5.465s 00:20:57.401 sys 0m4.353s 00:20:57.401 09:35:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:57.401 09:35:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:57.401 
09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.401 rmmod nvme_tcp 00:20:57.401 rmmod nvme_fabrics 00:20:57.401 rmmod nvme_keyring 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 83698 ']' 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 83698 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 83698 ']' 00:20:57.401 Process with pid 83698 is not found 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 83698 00:20:57.401 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83698) - No such process 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 83698 is not found' 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:20:57.401 09:35:21 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:57.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:57.968 Waiting for block devices as requested 00:20:57.968 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.968 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:57.968 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:57.968 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:57.968 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:20:57.968 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:57.968 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:20:57.969 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:20:57.969 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.969 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:57.969 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:58.229 09:35:22 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:20:58.229 00:20:58.229 real 0m26.317s 00:20:58.229 user 0m48.203s 00:20:58.229 sys 0m7.505s 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:58.229 ************************************ 00:20:58.229 09:35:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:58.229 END TEST nvmf_abort_qd_sizes 00:20:58.229 ************************************ 00:20:58.490 09:35:22 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:58.490 09:35:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:58.490 09:35:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58.490 09:35:22 -- common/autotest_common.sh@10 -- # set +x 00:20:58.490 ************************************ 00:20:58.490 START TEST keyring_file 00:20:58.490 ************************************ 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:58.490 * Looking for test storage... 
00:20:58.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.490 09:35:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.490 --rc genhtml_branch_coverage=1 00:20:58.490 --rc genhtml_function_coverage=1 00:20:58.490 --rc genhtml_legend=1 00:20:58.490 --rc geninfo_all_blocks=1 00:20:58.490 --rc geninfo_unexecuted_blocks=1 00:20:58.490 00:20:58.490 ' 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.490 --rc genhtml_branch_coverage=1 00:20:58.490 --rc genhtml_function_coverage=1 00:20:58.490 --rc genhtml_legend=1 00:20:58.490 --rc geninfo_all_blocks=1 00:20:58.490 --rc 
geninfo_unexecuted_blocks=1 00:20:58.490 00:20:58.490 ' 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.490 --rc genhtml_branch_coverage=1 00:20:58.490 --rc genhtml_function_coverage=1 00:20:58.490 --rc genhtml_legend=1 00:20:58.490 --rc geninfo_all_blocks=1 00:20:58.490 --rc geninfo_unexecuted_blocks=1 00:20:58.490 00:20:58.490 ' 00:20:58.490 09:35:22 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:58.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.490 --rc genhtml_branch_coverage=1 00:20:58.491 --rc genhtml_function_coverage=1 00:20:58.491 --rc genhtml_legend=1 00:20:58.491 --rc geninfo_all_blocks=1 00:20:58.491 --rc geninfo_unexecuted_blocks=1 00:20:58.491 00:20:58.491 ' 00:20:58.491 09:35:22 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:58.491 09:35:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.491 09:35:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.491 09:35:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.491 09:35:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.491 09:35:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.491 09:35:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.491 09:35:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.491 09:35:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:58.491 09:35:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:58.491 09:35:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:58.491 09:35:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:58.491 09:35:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:58.491 09:35:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:58.491 09:35:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:58.491 09:35:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:58.491 09:35:22 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XsudFMMgoj 00:20:58.491 09:35:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:20:58.491 09:35:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XsudFMMgoj 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XsudFMMgoj 00:20:58.750 09:35:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XsudFMMgoj 00:20:58.750 09:35:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3ew2KEUr7W 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:58.750 09:35:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:58.750 09:35:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:20:58.750 09:35:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:58.750 09:35:22 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:20:58.750 09:35:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:20:58.750 09:35:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3ew2KEUr7W 00:20:58.750 09:35:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3ew2KEUr7W 00:20:58.750 09:35:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3ew2KEUr7W 00:20:58.750 09:35:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=84601 00:20:58.750 09:35:22 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:58.750 09:35:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84601 00:20:58.750 09:35:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84601 ']' 00:20:58.750 09:35:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.750 09:35:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
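Each test key is just a file: prep_key writes an NVMe/TCP interchange-format PSK into a mktemp file and locks the permissions down, and that file is later registered with the keyring by name. A condensed sketch of the helpers traced above (the redirect into the file is not visible in the xtrace output and is inferred; the tmp path is the one mktemp produced in this run):

    path=$(mktemp)                                                       # /tmp/tmp.XsudFMMgoj for key0
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"  # wraps the raw hex key via the small python helper
    chmod 0600 "$path"
    # registered later against the bdevperf RPC socket:
    rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"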
00:20:58.750 09:35:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.750 09:35:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.750 09:35:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:58.750 [2024-10-16 09:35:23.047333] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:20:58.750 [2024-10-16 09:35:23.047430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84601 ] 00:20:59.009 [2024-10-16 09:35:23.181004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.009 [2024-10-16 09:35:23.233843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.009 [2024-10-16 09:35:23.312010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:20:59.268 09:35:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:59.268 [2024-10-16 09:35:23.528609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.268 null0 00:20:59.268 [2024-10-16 09:35:23.560614] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.268 [2024-10-16 09:35:23.560819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.268 09:35:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:59.268 [2024-10-16 09:35:23.588524] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:59.268 request: 00:20:59.268 { 00:20:59.268 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:59.268 "secure_channel": false, 00:20:59.268 "listen_address": { 00:20:59.268 "trtype": "tcp", 00:20:59.268 "traddr": "127.0.0.1", 00:20:59.268 "trsvcid": "4420" 00:20:59.268 }, 00:20:59.268 "method": "nvmf_subsystem_add_listener", 
00:20:59.268 "req_id": 1 00:20:59.268 } 00:20:59.268 Got JSON-RPC error response 00:20:59.268 response: 00:20:59.268 { 00:20:59.268 "code": -32602, 00:20:59.268 "message": "Invalid parameters" 00:20:59.268 } 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.268 09:35:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=84612 00:20:59.268 09:35:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84612 /var/tmp/bperf.sock 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84612 ']' 00:20:59.268 09:35:23 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:59.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.268 09:35:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:59.268 [2024-10-16 09:35:23.654042] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:20:59.268 [2024-10-16 09:35:23.654135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84612 ] 00:20:59.527 [2024-10-16 09:35:23.795386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.527 [2024-10-16 09:35:23.849284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.527 [2024-10-16 09:35:23.906777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:59.785 09:35:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.785 09:35:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:20:59.785 09:35:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:20:59.785 09:35:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:21:00.044 09:35:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3ew2KEUr7W 00:21:00.044 09:35:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3ew2KEUr7W 00:21:00.303 09:35:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:00.303 09:35:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:00.303 09:35:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.303 09:35:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:00.303 09:35:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.561 09:35:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XsudFMMgoj == \/\t\m\p\/\t\m\p\.\X\s\u\d\F\M\M\g\o\j ]] 00:21:00.561 09:35:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:00.561 09:35:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:00.561 09:35:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:00.561 09:35:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.562 09:35:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.820 09:35:25 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.3ew2KEUr7W == \/\t\m\p\/\t\m\p\.\3\e\w\2\K\E\U\r\7\W ]] 00:21:00.820 09:35:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:00.820 09:35:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:00.820 09:35:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:00.820 09:35:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.820 09:35:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.820 09:35:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:01.079 09:35:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:01.079 09:35:25 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:01.079 09:35:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:01.079 09:35:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:01.079 09:35:25 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:01.079 09:35:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:01.079 09:35:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:01.337 09:35:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:01.337 09:35:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:01.337 09:35:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:01.597 [2024-10-16 09:35:25.883693] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.597 nvme0n1 00:21:01.597 09:35:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:01.597 09:35:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:01.597 09:35:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:01.597 09:35:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:01.597 09:35:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:01.597 09:35:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:01.856 09:35:26 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:01.856 09:35:26 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:01.856 09:35:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:01.856 09:35:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:01.856 09:35:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:01.856 09:35:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:01.857 09:35:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.115 09:35:26 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:02.115 09:35:26 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:02.374 Running I/O for 1 seconds... 
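Before the workload starts, the trace verifies that attaching nvme0 bumped key0's reference count from 1 to 2 while key1 stays at 1; every get_refcnt check above is the same keyring_get_keys query filtered through jq, and the one-second run itself is kicked off through bdevperf.py:

    rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
    # expected: 1 before bdev_nvme_attach_controller, 2 once nvme0 holds key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests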
00:21:03.310 11754.00 IOPS, 45.91 MiB/s 00:21:03.310 Latency(us) 00:21:03.310 [2024-10-16T09:35:27.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.310 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:03.310 nvme0n1 : 1.01 11810.76 46.14 0.00 0.00 10809.64 5928.03 20971.52 00:21:03.310 [2024-10-16T09:35:27.714Z] =================================================================================================================== 00:21:03.310 [2024-10-16T09:35:27.714Z] Total : 11810.76 46.14 0.00 0.00 10809.64 5928.03 20971.52 00:21:03.310 { 00:21:03.310 "results": [ 00:21:03.310 { 00:21:03.310 "job": "nvme0n1", 00:21:03.310 "core_mask": "0x2", 00:21:03.310 "workload": "randrw", 00:21:03.310 "percentage": 50, 00:21:03.310 "status": "finished", 00:21:03.310 "queue_depth": 128, 00:21:03.310 "io_size": 4096, 00:21:03.310 "runtime": 1.006032, 00:21:03.310 "iops": 11810.757510695485, 00:21:03.310 "mibps": 46.13577152615424, 00:21:03.310 "io_failed": 0, 00:21:03.310 "io_timeout": 0, 00:21:03.310 "avg_latency_us": 10809.638017474867, 00:21:03.310 "min_latency_us": 5928.029090909091, 00:21:03.310 "max_latency_us": 20971.52 00:21:03.310 } 00:21:03.310 ], 00:21:03.310 "core_count": 1 00:21:03.310 } 00:21:03.310 09:35:27 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:03.310 09:35:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:03.568 09:35:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:03.568 09:35:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:03.568 09:35:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:03.568 09:35:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:03.568 09:35:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:03.568 09:35:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.827 09:35:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:03.827 09:35:28 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:03.827 09:35:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:03.827 09:35:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:03.827 09:35:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:03.827 09:35:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:03.827 09:35:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:04.086 09:35:28 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:04.086 09:35:28 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:04.086 09:35:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:04.086 09:35:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:04.086 09:35:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:04.086 09:35:28 keyring_file -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:21:04.086 09:35:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:04.086 09:35:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.086 09:35:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:04.086 09:35:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:04.345 [2024-10-16 09:35:28.597092] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:04.345 [2024-10-16 09:35:28.597698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205bd90 (107): Transport endpoint is not connected 00:21:04.345 [2024-10-16 09:35:28.598678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205bd90 (9): Bad file descriptor 00:21:04.345 [2024-10-16 09:35:28.599674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:04.345 [2024-10-16 09:35:28.599699] nvme.c: 721:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:04.345 [2024-10-16 09:35:28.599710] nvme.c: 897:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:04.345 [2024-10-16 09:35:28.599724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:04.345 request: 00:21:04.345 { 00:21:04.345 "name": "nvme0", 00:21:04.345 "trtype": "tcp", 00:21:04.345 "traddr": "127.0.0.1", 00:21:04.345 "adrfam": "ipv4", 00:21:04.345 "trsvcid": "4420", 00:21:04.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:04.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:04.345 "prchk_reftag": false, 00:21:04.345 "prchk_guard": false, 00:21:04.345 "hdgst": false, 00:21:04.345 "ddgst": false, 00:21:04.345 "psk": "key1", 00:21:04.345 "allow_unrecognized_csi": false, 00:21:04.345 "method": "bdev_nvme_attach_controller", 00:21:04.345 "req_id": 1 00:21:04.345 } 00:21:04.345 Got JSON-RPC error response 00:21:04.345 response: 00:21:04.345 { 00:21:04.345 "code": -5, 00:21:04.345 "message": "Input/output error" 00:21:04.345 } 00:21:04.345 09:35:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:04.345 09:35:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:04.345 09:35:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:04.345 09:35:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:04.345 09:35:28 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:04.345 09:35:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:04.345 09:35:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:04.345 09:35:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:04.345 09:35:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.345 09:35:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:04.604 09:35:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:04.604 09:35:28 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:04.604 09:35:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:04.604 09:35:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:04.604 09:35:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:04.604 09:35:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.604 09:35:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:04.862 09:35:29 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:04.862 09:35:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:04.863 09:35:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:05.121 09:35:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:05.121 09:35:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:05.386 09:35:29 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:05.386 09:35:29 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:05.386 09:35:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:05.646 09:35:29 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:05.646 09:35:29 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.XsudFMMgoj 00:21:05.646 09:35:29 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:21:05.646 09:35:29 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:21:05.646 09:35:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:21:05.646 09:35:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:05.646 09:35:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.646 09:35:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:05.646 09:35:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:05.646 09:35:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:21:05.646 09:35:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:21:05.906 [2024-10-16 09:35:30.177829] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XsudFMMgoj': 0100660 00:21:05.906 [2024-10-16 09:35:30.177922] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:05.906 request: 00:21:05.906 { 00:21:05.906 "name": "key0", 00:21:05.906 "path": "/tmp/tmp.XsudFMMgoj", 00:21:05.906 "method": "keyring_file_add_key", 00:21:05.906 "req_id": 1 00:21:05.906 } 00:21:05.906 Got JSON-RPC error response 00:21:05.906 response: 00:21:05.906 { 00:21:05.906 "code": -1, 00:21:05.906 "message": "Operation not permitted" 00:21:05.906 } 00:21:05.906 09:35:30 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:05.906 09:35:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.906 09:35:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.906 09:35:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.906 09:35:30 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.XsudFMMgoj 00:21:05.906 09:35:30 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:21:05.906 09:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XsudFMMgoj 00:21:06.163 09:35:30 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.XsudFMMgoj 00:21:06.163 09:35:30 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:06.163 09:35:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:06.163 09:35:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:06.163 09:35:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:06.163 09:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:06.163 09:35:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:06.421 09:35:30 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:06.421 09:35:30 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.421 09:35:30 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:06.421 09:35:30 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.421 09:35:30 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:06.421 09:35:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.421 09:35:30 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:06.421 09:35:30 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.421 09:35:30 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.421 09:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.680 [2024-10-16 09:35:30.962023] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XsudFMMgoj': No such file or directory 00:21:06.680 [2024-10-16 09:35:30.962125] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:06.680 [2024-10-16 09:35:30.962152] nvme.c: 695:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:06.680 [2024-10-16 09:35:30.962162] nvme.c: 897:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:06.680 [2024-10-16 09:35:30.962174] nvme.c: 844:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:06.680 [2024-10-16 09:35:30.962183] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:06.680 request: 00:21:06.680 { 00:21:06.680 "name": "nvme0", 00:21:06.680 "trtype": "tcp", 00:21:06.680 "traddr": "127.0.0.1", 00:21:06.680 "adrfam": "ipv4", 00:21:06.680 "trsvcid": "4420", 00:21:06.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:06.680 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:06.680 "prchk_reftag": false, 00:21:06.680 "prchk_guard": false, 00:21:06.680 "hdgst": false, 00:21:06.680 "ddgst": false, 00:21:06.680 "psk": "key0", 00:21:06.680 "allow_unrecognized_csi": false, 00:21:06.680 "method": "bdev_nvme_attach_controller", 00:21:06.680 "req_id": 1 00:21:06.680 } 00:21:06.680 Got JSON-RPC error response 00:21:06.680 response: 00:21:06.680 { 00:21:06.680 "code": -19, 00:21:06.680 "message": "No such device" 00:21:06.680 } 00:21:06.680 09:35:30 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:06.680 09:35:30 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.680 09:35:30 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.680 09:35:30 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.680 09:35:30 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:06.680 09:35:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:06.939 09:35:31 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:06.939 
09:35:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.X2CAIHbdk4 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:06.939 09:35:31 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:06.939 09:35:31 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:21:06.939 09:35:31 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:06.939 09:35:31 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:21:06.939 09:35:31 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:21:06.939 09:35:31 keyring_file -- nvmf/common.sh@731 -- # python - 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.X2CAIHbdk4 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.X2CAIHbdk4 00:21:06.939 09:35:31 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.X2CAIHbdk4 00:21:06.939 09:35:31 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.X2CAIHbdk4 00:21:06.939 09:35:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.X2CAIHbdk4 00:21:07.197 09:35:31 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:07.197 09:35:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:07.456 nvme0n1 00:21:07.456 09:35:31 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:07.456 09:35:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:07.456 09:35:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:07.456 09:35:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.456 09:35:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.456 09:35:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.715 09:35:32 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:07.715 09:35:32 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:07.715 09:35:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:07.974 09:35:32 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:07.974 09:35:32 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:07.974 09:35:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.974 09:35:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.974 09:35:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.232 09:35:32 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:08.232 09:35:32 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:08.232 09:35:32 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:08.232 09:35:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.232 09:35:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.232 09:35:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.232 09:35:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.490 09:35:32 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:08.490 09:35:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:08.490 09:35:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:08.749 09:35:33 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:08.749 09:35:33 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:08.749 09:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.007 09:35:33 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:09.007 09:35:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.X2CAIHbdk4 00:21:09.007 09:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.X2CAIHbdk4 00:21:09.265 09:35:33 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3ew2KEUr7W 00:21:09.265 09:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3ew2KEUr7W 00:21:09.524 09:35:33 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:09.524 09:35:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:09.783 nvme0n1 00:21:09.783 09:35:34 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:09.783 09:35:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:10.041 09:35:34 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:10.041 "subsystems": [ 00:21:10.041 { 00:21:10.041 "subsystem": "keyring", 00:21:10.041 "config": [ 00:21:10.041 { 00:21:10.041 "method": "keyring_file_add_key", 00:21:10.041 "params": { 00:21:10.041 "name": "key0", 00:21:10.041 "path": "/tmp/tmp.X2CAIHbdk4" 00:21:10.041 } 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "method": "keyring_file_add_key", 00:21:10.041 "params": { 00:21:10.041 "name": "key1", 00:21:10.041 "path": "/tmp/tmp.3ew2KEUr7W" 00:21:10.041 } 00:21:10.041 } 00:21:10.041 ] 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "subsystem": "iobuf", 00:21:10.041 "config": [ 00:21:10.041 { 00:21:10.041 "method": "iobuf_set_options", 00:21:10.041 "params": { 00:21:10.041 "small_pool_count": 8192, 00:21:10.041 "large_pool_count": 1024, 00:21:10.041 "small_bufsize": 8192, 00:21:10.041 "large_bufsize": 135168 00:21:10.041 } 00:21:10.041 } 00:21:10.041 ] 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "subsystem": "sock", 00:21:10.041 "config": [ 
00:21:10.041 { 00:21:10.041 "method": "sock_set_default_impl", 00:21:10.041 "params": { 00:21:10.041 "impl_name": "uring" 00:21:10.041 } 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "method": "sock_impl_set_options", 00:21:10.041 "params": { 00:21:10.041 "impl_name": "ssl", 00:21:10.041 "recv_buf_size": 4096, 00:21:10.041 "send_buf_size": 4096, 00:21:10.041 "enable_recv_pipe": true, 00:21:10.041 "enable_quickack": false, 00:21:10.041 "enable_placement_id": 0, 00:21:10.041 "enable_zerocopy_send_server": true, 00:21:10.041 "enable_zerocopy_send_client": false, 00:21:10.041 "zerocopy_threshold": 0, 00:21:10.041 "tls_version": 0, 00:21:10.041 "enable_ktls": false 00:21:10.041 } 00:21:10.041 }, 00:21:10.041 { 00:21:10.041 "method": "sock_impl_set_options", 00:21:10.041 "params": { 00:21:10.042 "impl_name": "posix", 00:21:10.042 "recv_buf_size": 2097152, 00:21:10.042 "send_buf_size": 2097152, 00:21:10.042 "enable_recv_pipe": true, 00:21:10.042 "enable_quickack": false, 00:21:10.042 "enable_placement_id": 0, 00:21:10.042 "enable_zerocopy_send_server": true, 00:21:10.042 "enable_zerocopy_send_client": false, 00:21:10.042 "zerocopy_threshold": 0, 00:21:10.042 "tls_version": 0, 00:21:10.042 "enable_ktls": false 00:21:10.042 } 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "method": "sock_impl_set_options", 00:21:10.042 "params": { 00:21:10.042 "impl_name": "uring", 00:21:10.042 "recv_buf_size": 2097152, 00:21:10.042 "send_buf_size": 2097152, 00:21:10.042 "enable_recv_pipe": true, 00:21:10.042 "enable_quickack": false, 00:21:10.042 "enable_placement_id": 0, 00:21:10.042 "enable_zerocopy_send_server": false, 00:21:10.042 "enable_zerocopy_send_client": false, 00:21:10.042 "zerocopy_threshold": 0, 00:21:10.042 "tls_version": 0, 00:21:10.042 "enable_ktls": false 00:21:10.042 } 00:21:10.042 } 00:21:10.042 ] 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "subsystem": "vmd", 00:21:10.042 "config": [] 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "subsystem": "accel", 00:21:10.042 "config": [ 00:21:10.042 { 00:21:10.042 "method": "accel_set_options", 00:21:10.042 "params": { 00:21:10.042 "small_cache_size": 128, 00:21:10.042 "large_cache_size": 16, 00:21:10.042 "task_count": 2048, 00:21:10.042 "sequence_count": 2048, 00:21:10.042 "buf_count": 2048 00:21:10.042 } 00:21:10.042 } 00:21:10.042 ] 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "subsystem": "bdev", 00:21:10.042 "config": [ 00:21:10.042 { 00:21:10.042 "method": "bdev_set_options", 00:21:10.042 "params": { 00:21:10.042 "bdev_io_pool_size": 65535, 00:21:10.042 "bdev_io_cache_size": 256, 00:21:10.042 "bdev_auto_examine": true, 00:21:10.042 "iobuf_small_cache_size": 128, 00:21:10.042 "iobuf_large_cache_size": 16 00:21:10.042 } 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "method": "bdev_raid_set_options", 00:21:10.042 "params": { 00:21:10.042 "process_window_size_kb": 1024, 00:21:10.042 "process_max_bandwidth_mb_sec": 0 00:21:10.042 } 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "method": "bdev_iscsi_set_options", 00:21:10.042 "params": { 00:21:10.042 "timeout_sec": 30 00:21:10.042 } 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "method": "bdev_nvme_set_options", 00:21:10.042 "params": { 00:21:10.042 "action_on_timeout": "none", 00:21:10.042 "timeout_us": 0, 00:21:10.042 "timeout_admin_us": 0, 00:21:10.042 "keep_alive_timeout_ms": 10000, 00:21:10.042 "arbitration_burst": 0, 00:21:10.042 "low_priority_weight": 0, 00:21:10.042 "medium_priority_weight": 0, 00:21:10.042 "high_priority_weight": 0, 00:21:10.042 "nvme_adminq_poll_period_us": 10000, 00:21:10.042 
"nvme_ioq_poll_period_us": 0, 00:21:10.042 "io_queue_requests": 512, 00:21:10.042 "delay_cmd_submit": true, 00:21:10.042 "transport_retry_count": 4, 00:21:10.042 "bdev_retry_count": 3, 00:21:10.042 "transport_ack_timeout": 0, 00:21:10.042 "ctrlr_loss_timeout_sec": 0, 00:21:10.042 "reconnect_delay_sec": 0, 00:21:10.042 "fast_io_fail_timeout_sec": 0, 00:21:10.042 "disable_auto_failback": false, 00:21:10.042 "generate_uuids": false, 00:21:10.042 "transport_tos": 0, 00:21:10.042 "nvme_error_stat": false, 00:21:10.042 "rdma_srq_size": 0, 00:21:10.042 "io_path_stat": false, 00:21:10.042 "allow_accel_sequence": false, 00:21:10.042 "rdma_max_cq_size": 0, 00:21:10.042 "rdma_cm_event_timeout_ms": 0, 00:21:10.042 "dhchap_digests": [ 00:21:10.042 "sha256", 00:21:10.042 "sha384", 00:21:10.042 "sha512" 00:21:10.042 ], 00:21:10.042 "dhchap_dhgroups": [ 00:21:10.042 "null", 00:21:10.042 "ffdhe2048", 00:21:10.042 "ffdhe3072", 00:21:10.042 "ffdhe4096", 00:21:10.042 "ffdhe6144", 00:21:10.042 "ffdhe8192" 00:21:10.042 ] 00:21:10.042 } 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "method": "bdev_nvme_attach_controller", 00:21:10.042 "params": { 00:21:10.042 "name": "nvme0", 00:21:10.042 "trtype": "TCP", 00:21:10.042 "adrfam": "IPv4", 00:21:10.042 "traddr": "127.0.0.1", 00:21:10.042 "trsvcid": "4420", 00:21:10.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.042 "prchk_reftag": false, 00:21:10.042 "prchk_guard": false, 00:21:10.042 "ctrlr_loss_timeout_sec": 0, 00:21:10.042 "reconnect_delay_sec": 0, 00:21:10.042 "fast_io_fail_timeout_sec": 0, 00:21:10.042 "psk": "key0", 00:21:10.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.042 "hdgst": false, 00:21:10.042 "ddgst": false, 00:21:10.042 "multipath": "multipath" 00:21:10.042 } 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "method": "bdev_nvme_set_hotplug", 00:21:10.042 "params": { 00:21:10.042 "period_us": 100000, 00:21:10.042 "enable": false 00:21:10.042 } 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "method": "bdev_wait_for_examine" 00:21:10.042 } 00:21:10.042 ] 00:21:10.042 }, 00:21:10.042 { 00:21:10.042 "subsystem": "nbd", 00:21:10.042 "config": [] 00:21:10.042 } 00:21:10.042 ] 00:21:10.042 }' 00:21:10.042 09:35:34 keyring_file -- keyring/file.sh@115 -- # killprocess 84612 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84612 ']' 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84612 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84612 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:10.042 killing process with pid 84612 00:21:10.042 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.042 00:21:10.042 Latency(us) 00:21:10.042 [2024-10-16T09:35:34.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.042 [2024-10-16T09:35:34.446Z] =================================================================================================================== 00:21:10.042 [2024-10-16T09:35:34.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84612' 00:21:10.042 09:35:34 keyring_file -- 
common/autotest_common.sh@969 -- # kill 84612 00:21:10.042 09:35:34 keyring_file -- common/autotest_common.sh@974 -- # wait 84612 00:21:10.301 09:35:34 keyring_file -- keyring/file.sh@118 -- # bperfpid=84850 00:21:10.301 09:35:34 keyring_file -- keyring/file.sh@120 -- # waitforlisten 84850 /var/tmp/bperf.sock 00:21:10.301 09:35:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84850 ']' 00:21:10.301 09:35:34 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:10.301 09:35:34 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:10.301 "subsystems": [ 00:21:10.301 { 00:21:10.301 "subsystem": "keyring", 00:21:10.301 "config": [ 00:21:10.301 { 00:21:10.301 "method": "keyring_file_add_key", 00:21:10.301 "params": { 00:21:10.301 "name": "key0", 00:21:10.301 "path": "/tmp/tmp.X2CAIHbdk4" 00:21:10.301 } 00:21:10.301 }, 00:21:10.301 { 00:21:10.301 "method": "keyring_file_add_key", 00:21:10.301 "params": { 00:21:10.301 "name": "key1", 00:21:10.301 "path": "/tmp/tmp.3ew2KEUr7W" 00:21:10.301 } 00:21:10.301 } 00:21:10.301 ] 00:21:10.301 }, 00:21:10.301 { 00:21:10.301 "subsystem": "iobuf", 00:21:10.301 "config": [ 00:21:10.301 { 00:21:10.301 "method": "iobuf_set_options", 00:21:10.301 "params": { 00:21:10.301 "small_pool_count": 8192, 00:21:10.301 "large_pool_count": 1024, 00:21:10.301 "small_bufsize": 8192, 00:21:10.301 "large_bufsize": 135168 00:21:10.301 } 00:21:10.301 } 00:21:10.301 ] 00:21:10.301 }, 00:21:10.301 { 00:21:10.301 "subsystem": "sock", 00:21:10.301 "config": [ 00:21:10.301 { 00:21:10.301 "method": "sock_set_default_impl", 00:21:10.301 "params": { 00:21:10.301 "impl_name": "uring" 00:21:10.301 } 00:21:10.301 }, 00:21:10.301 { 00:21:10.301 "method": "sock_impl_set_options", 00:21:10.301 "params": { 00:21:10.301 "impl_name": "ssl", 00:21:10.301 "recv_buf_size": 4096, 00:21:10.301 "send_buf_size": 4096, 00:21:10.301 "enable_recv_pipe": true, 00:21:10.301 "enable_quickack": false, 00:21:10.301 "enable_placement_id": 0, 00:21:10.301 "enable_zerocopy_send_server": true, 00:21:10.301 "enable_zerocopy_send_client": false, 00:21:10.301 "zerocopy_threshold": 0, 00:21:10.301 "tls_version": 0, 00:21:10.301 "enable_ktls": false 00:21:10.301 } 00:21:10.301 }, 00:21:10.302 { 00:21:10.302 "method": "sock_impl_set_options", 00:21:10.302 "params": { 00:21:10.302 "impl_name": "posix", 00:21:10.302 "recv_buf_size": 2097152, 00:21:10.302 "send_buf_size": 2097152, 00:21:10.302 "enable_recv_pipe": true, 00:21:10.302 "enable_quickack": false, 00:21:10.302 "enable_placement_id": 0, 00:21:10.302 "enable_zerocopy_send_server": true, 00:21:10.302 "enable_zerocopy_send_client": false, 00:21:10.302 "zerocopy_threshold": 0, 00:21:10.302 "tls_version": 0, 00:21:10.302 "enable_ktls": false 00:21:10.302 } 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "method": "sock_impl_set_options", 00:21:10.302 "params": { 00:21:10.302 "impl_name": "uring", 00:21:10.302 "recv_buf_size": 2097152, 00:21:10.302 "send_buf_size": 2097152, 00:21:10.302 "enable_recv_pipe": true, 00:21:10.302 "enable_quickack": false, 00:21:10.302 "enable_placement_id": 0, 00:21:10.302 "enable_zerocopy_send_server": false, 00:21:10.302 "enable_zerocopy_send_client": false, 00:21:10.302 "zerocopy_threshold": 0, 00:21:10.302 "tls_version": 0, 00:21:10.302 "enable_ktls": false 00:21:10.302 } 00:21:10.302 } 00:21:10.302 ] 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "subsystem": "vmd", 00:21:10.302 "config": [] 00:21:10.302 }, 
00:21:10.302 { 00:21:10.302 "subsystem": "accel", 00:21:10.302 "config": [ 00:21:10.302 { 00:21:10.302 "method": "accel_set_options", 00:21:10.302 "params": { 00:21:10.302 "small_cache_size": 128, 00:21:10.302 "large_cache_size": 16, 00:21:10.302 "task_count": 2048, 00:21:10.302 "sequence_count": 2048, 00:21:10.302 "buf_count": 2048 00:21:10.302 } 00:21:10.302 } 00:21:10.302 ] 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "subsystem": "bdev", 00:21:10.302 "config": [ 00:21:10.302 { 00:21:10.302 "method": "bdev_set_options", 00:21:10.302 "params": { 00:21:10.302 "bdev_io_pool_size": 65535, 00:21:10.302 "bdev_io_cache_size": 256, 00:21:10.302 "bdev_auto_examine": true, 00:21:10.302 "iobuf_small_cache_size": 128, 00:21:10.302 "iobuf_large_cache_size": 16 00:21:10.302 } 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "method": "bdev_raid_set_options", 00:21:10.302 "params": { 00:21:10.302 "process_window_size_kb": 1024, 00:21:10.302 "process_max_bandwidth_mb_sec": 0 00:21:10.302 } 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "method": "bdev_iscsi_set_options", 00:21:10.302 "params": { 00:21:10.302 "timeout_sec": 30 00:21:10.302 } 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "method": "bdev_nvme_set_options", 00:21:10.302 "params": { 00:21:10.302 "action_on_timeout": "none", 00:21:10.302 "timeout_us": 0, 00:21:10.302 "timeout_admin_us": 0, 00:21:10.302 "keep_alive_timeout_ms": 10000, 00:21:10.302 "arbitration_burst": 0, 00:21:10.302 "low_priority_weight": 0, 00:21:10.302 "medium_priority_weight": 0, 00:21:10.302 "high_priority_weight": 0, 00:21:10.302 "nvme_adminq_poll_period_us": 10000, 00:21:10.302 "nvme_ioq_poll_period_us": 0, 00:21:10.302 "io_queue_requests": 512, 00:21:10.302 "delay_cmd_submit": true, 00:21:10.302 "transport_retry_count": 4, 00:21:10.302 "bdev_retry_count": 3, 00:21:10.302 "transport_ack_timeout": 0, 00:21:10.302 "ctrlr_loss_timeout_sec": 0, 00:21:10.302 "reconnect_delay_sec": 0, 00:21:10.302 "fast_io_fail_timeout_sec": 0, 00:21:10.302 "disable_auto_failback": false, 00:21:10.302 "generate_uuids": false, 00:21:10.302 "transport_tos": 0, 00:21:10.302 "nvme_error_stat": false, 00:21:10.302 "rdma_srq_size": 0, 00:21:10.302 "io_path_stat": false, 00:21:10.302 "allow_accel_sequence": false, 00:21:10.302 "rdma_max_cq_size": 0, 00:21:10.302 "rdma_cm_event_timeout_ms": 0, 00:21:10.302 "dhchap_digests": [ 00:21:10.302 "sha256", 00:21:10.302 "sha384", 00:21:10.302 "sha512" 00:21:10.302 ], 00:21:10.302 "dhchap_dhgroups": [ 00:21:10.302 "null", 00:21:10.302 "ffdhe2048", 00:21:10.302 "ffdhe3072", 00:21:10.302 "ffdhe4096", 00:21:10.302 "ffdhe6144", 00:21:10.302 "ffdhe8192" 00:21:10.302 ] 00:21:10.302 } 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "method": "bdev_nvme_attach_controller", 00:21:10.302 "params": { 00:21:10.302 "name": "nvme0", 00:21:10.302 "trtype": "TCP", 00:21:10.302 "adrfam": "IPv4", 00:21:10.302 "traddr": "127.0.0.1", 00:21:10.302 "trsvcid": "4420", 00:21:10.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:10.302 "prchk_reftag": false, 00:21:10.302 "prchk_guard": false, 00:21:10.302 "ctrlr_loss_timeout_sec": 0, 00:21:10.302 "reconnect_delay_sec": 0, 00:21:10.302 "fast_io_fail_timeout_sec": 0, 00:21:10.302 "psk": "key0", 00:21:10.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:10.302 "hdgst": false, 00:21:10.302 "ddgst": false, 00:21:10.302 "multipath": "multipath" 00:21:10.302 } 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "method": "bdev_nvme_set_hotplug", 00:21:10.302 "params": { 00:21:10.302 "period_us": 100000, 00:21:10.302 "enable": false 00:21:10.302 } 00:21:10.302 }, 
00:21:10.302 { 00:21:10.302 "method": "bdev_wait_for_examine" 00:21:10.302 } 00:21:10.302 ] 00:21:10.302 }, 00:21:10.302 { 00:21:10.302 "subsystem": "nbd", 00:21:10.302 "config": [] 00:21:10.302 } 00:21:10.302 ] 00:21:10.302 }' 00:21:10.302 09:35:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:10.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:10.302 09:35:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.302 09:35:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:10.302 09:35:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.302 09:35:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:10.561 [2024-10-16 09:35:34.726645] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 00:21:10.561 [2024-10-16 09:35:34.726732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84850 ] 00:21:10.561 [2024-10-16 09:35:34.858010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.561 [2024-10-16 09:35:34.935769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.819 [2024-10-16 09:35:35.091654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:10.819 [2024-10-16 09:35:35.163112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:11.387 09:35:35 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.387 09:35:35 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:11.387 09:35:35 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:11.387 09:35:35 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:11.387 09:35:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.645 09:35:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:11.645 09:35:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:11.645 09:35:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:11.645 09:35:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.645 09:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.645 09:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.645 09:35:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.904 09:35:36 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:11.904 09:35:36 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:11.904 09:35:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:11.904 09:35:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.904 09:35:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.904 09:35:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.904 09:35:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
00:21:12.162 09:35:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:12.162 09:35:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:12.162 09:35:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:12.162 09:35:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:12.421 09:35:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:12.421 09:35:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:12.421 09:35:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.X2CAIHbdk4 /tmp/tmp.3ew2KEUr7W 00:21:12.421 09:35:36 keyring_file -- keyring/file.sh@20 -- # killprocess 84850 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84850 ']' 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84850 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@955 -- # uname 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84850 00:21:12.421 killing process with pid 84850 00:21:12.421 Received shutdown signal, test time was about 1.000000 seconds 00:21:12.421 00:21:12.421 Latency(us) 00:21:12.421 [2024-10-16T09:35:36.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.421 [2024-10-16T09:35:36.825Z] =================================================================================================================== 00:21:12.421 [2024-10-16T09:35:36.825Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84850' 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@969 -- # kill 84850 00:21:12.421 09:35:36 keyring_file -- common/autotest_common.sh@974 -- # wait 84850 00:21:12.681 09:35:37 keyring_file -- keyring/file.sh@21 -- # killprocess 84601 00:21:12.681 09:35:37 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84601 ']' 00:21:12.681 09:35:37 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84601 00:21:12.681 09:35:37 keyring_file -- common/autotest_common.sh@955 -- # uname 00:21:12.681 09:35:37 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.681 09:35:37 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84601 00:21:12.940 09:35:37 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:12.940 killing process with pid 84601 00:21:12.940 09:35:37 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:12.940 09:35:37 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84601' 00:21:12.940 09:35:37 keyring_file -- common/autotest_common.sh@969 -- # kill 84601 00:21:12.940 09:35:37 keyring_file -- common/autotest_common.sh@974 -- # wait 84601 00:21:13.198 ************************************ 00:21:13.198 END TEST keyring_file 00:21:13.198 ************************************ 00:21:13.198 00:21:13.198 real 0m14.810s 00:21:13.198 user 0m37.400s 00:21:13.198 sys 0m2.992s 00:21:13.198 09:35:37 keyring_file -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.198 09:35:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:13.198 09:35:37 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:21:13.198 09:35:37 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:13.198 09:35:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:13.198 09:35:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.198 09:35:37 -- common/autotest_common.sh@10 -- # set +x 00:21:13.198 ************************************ 00:21:13.198 START TEST keyring_linux 00:21:13.198 ************************************ 00:21:13.198 09:35:37 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:13.198 Joined session keyring: 741524823 00:21:13.198 * Looking for test storage... 00:21:13.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:13.198 09:35:37 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:13.198 09:35:37 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:21:13.198 09:35:37 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:13.456 09:35:37 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:13.456 09:35:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.457 --rc genhtml_branch_coverage=1 00:21:13.457 --rc genhtml_function_coverage=1 00:21:13.457 --rc genhtml_legend=1 00:21:13.457 --rc geninfo_all_blocks=1 00:21:13.457 --rc geninfo_unexecuted_blocks=1 00:21:13.457 00:21:13.457 ' 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.457 --rc genhtml_branch_coverage=1 00:21:13.457 --rc genhtml_function_coverage=1 00:21:13.457 --rc genhtml_legend=1 00:21:13.457 --rc geninfo_all_blocks=1 00:21:13.457 --rc geninfo_unexecuted_blocks=1 00:21:13.457 00:21:13.457 ' 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.457 --rc genhtml_branch_coverage=1 00:21:13.457 --rc genhtml_function_coverage=1 00:21:13.457 --rc genhtml_legend=1 00:21:13.457 --rc geninfo_all_blocks=1 00:21:13.457 --rc geninfo_unexecuted_blocks=1 00:21:13.457 00:21:13.457 ' 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.457 --rc genhtml_branch_coverage=1 00:21:13.457 --rc genhtml_function_coverage=1 00:21:13.457 --rc genhtml_legend=1 00:21:13.457 --rc geninfo_all_blocks=1 00:21:13.457 --rc geninfo_unexecuted_blocks=1 00:21:13.457 00:21:13.457 ' 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.457 09:35:37 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5989d9e2-d339-420e-a2f4-bd87604f111f 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5989d9e2-d339-420e-a2f4-bd87604f111f 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.457 09:35:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.457 09:35:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.457 09:35:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.457 09:35:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.457 09:35:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:13.457 09:35:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@731 -- # python - 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:13.457 /tmp/:spdk-test:key0 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:21:13.457 09:35:37 keyring_linux -- nvmf/common.sh@731 -- # python - 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:13.457 /tmp/:spdk-test:key1 00:21:13.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.457 09:35:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84977 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.457 09:35:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84977 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84977 ']' 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.457 09:35:37 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.458 09:35:37 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.458 09:35:37 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.458 09:35:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:13.458 [2024-10-16 09:35:37.858833] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:21:13.458 [2024-10-16 09:35:37.858969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84977 ] 00:21:13.716 [2024-10-16 09:35:37.997085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.716 [2024-10-16 09:35:38.037964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.716 [2024-10-16 09:35:38.102692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:21:13.975 09:35:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:13.975 [2024-10-16 09:35:38.288708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.975 null0 00:21:13.975 [2024-10-16 09:35:38.320684] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.975 [2024-10-16 09:35:38.320884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.975 09:35:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:13.975 690092824 00:21:13.975 09:35:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:13.975 494153858 00:21:13.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:13.975 09:35:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84988 00:21:13.975 09:35:38 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:13.975 09:35:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84988 /var/tmp/bperf.sock 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84988 ']' 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.975 09:35:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:14.233 [2024-10-16 09:35:38.409433] Starting SPDK v25.01-pre git sha1 27a8e04f9 / DPDK 24.03.0 initialization... 
00:21:14.233 [2024-10-16 09:35:38.409534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84988 ] 00:21:14.233 [2024-10-16 09:35:38.547585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.233 [2024-10-16 09:35:38.614420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.492 09:35:38 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.492 09:35:38 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:21:14.492 09:35:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:14.492 09:35:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:14.751 09:35:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:14.751 09:35:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:15.010 [2024-10-16 09:35:39.171976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:15.010 09:35:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:15.010 09:35:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:15.268 [2024-10-16 09:35:39.489278] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.268 nvme0n1 00:21:15.268 09:35:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:15.268 09:35:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:15.268 09:35:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:15.268 09:35:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:15.268 09:35:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.268 09:35:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:15.527 09:35:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:15.527 09:35:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:15.527 09:35:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:15.527 09:35:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:15.527 09:35:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.527 09:35:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.527 09:35:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:15.785 09:35:40 keyring_linux -- keyring/linux.sh@25 -- # sn=690092824 00:21:15.785 09:35:40 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:15.785 09:35:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:21:15.785 09:35:40 keyring_linux -- keyring/linux.sh@26 -- # [[ 690092824 == \6\9\0\0\9\2\8\2\4 ]] 00:21:15.785 09:35:40 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 690092824 00:21:15.785 09:35:40 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:15.785 09:35:40 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:15.785 Running I/O for 1 seconds... 00:21:17.161 14644.00 IOPS, 57.20 MiB/s 00:21:17.161 Latency(us) 00:21:17.161 [2024-10-16T09:35:41.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.161 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:17.161 nvme0n1 : 1.01 14658.34 57.26 0.00 0.00 8695.17 2591.65 11856.06 00:21:17.161 [2024-10-16T09:35:41.565Z] =================================================================================================================== 00:21:17.161 [2024-10-16T09:35:41.565Z] Total : 14658.34 57.26 0.00 0.00 8695.17 2591.65 11856.06 00:21:17.161 { 00:21:17.161 "results": [ 00:21:17.161 { 00:21:17.161 "job": "nvme0n1", 00:21:17.161 "core_mask": "0x2", 00:21:17.161 "workload": "randread", 00:21:17.161 "status": "finished", 00:21:17.161 "queue_depth": 128, 00:21:17.161 "io_size": 4096, 00:21:17.161 "runtime": 1.007822, 00:21:17.161 "iops": 14658.342445392143, 00:21:17.161 "mibps": 57.25915017731306, 00:21:17.161 "io_failed": 0, 00:21:17.161 "io_timeout": 0, 00:21:17.161 "avg_latency_us": 8695.170573343261, 00:21:17.161 "min_latency_us": 2591.650909090909, 00:21:17.161 "max_latency_us": 11856.058181818182 00:21:17.161 } 00:21:17.161 ], 00:21:17.161 "core_count": 1 00:21:17.161 } 00:21:17.161 09:35:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:17.161 09:35:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:17.161 09:35:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:17.161 09:35:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:17.161 09:35:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:17.161 09:35:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:17.161 09:35:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:17.161 09:35:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:17.420 09:35:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:17.420 09:35:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:17.420 09:35:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:17.420 09:35:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:17.420 09:35:41 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:21:17.420 09:35:41 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:17.420 
09:35:41 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:17.420 09:35:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.420 09:35:41 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:17.420 09:35:41 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.420 09:35:41 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:17.420 09:35:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:17.678 [2024-10-16 09:35:41.993831] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:17.678 [2024-10-16 09:35:41.994034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa443f0 (107): Transport endpoint is not connected 00:21:17.678 [2024-10-16 09:35:41.995025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa443f0 (9): Bad file descriptor 00:21:17.678 [2024-10-16 09:35:41.996025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:17.678 [2024-10-16 09:35:41.996061] nvme.c: 721:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:17.678 [2024-10-16 09:35:41.996087] nvme.c: 897:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:17.678 [2024-10-16 09:35:41.996098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:17.678 request: 00:21:17.678 { 00:21:17.678 "name": "nvme0", 00:21:17.678 "trtype": "tcp", 00:21:17.678 "traddr": "127.0.0.1", 00:21:17.678 "adrfam": "ipv4", 00:21:17.678 "trsvcid": "4420", 00:21:17.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.678 "prchk_reftag": false, 00:21:17.678 "prchk_guard": false, 00:21:17.678 "hdgst": false, 00:21:17.678 "ddgst": false, 00:21:17.678 "psk": ":spdk-test:key1", 00:21:17.678 "allow_unrecognized_csi": false, 00:21:17.678 "method": "bdev_nvme_attach_controller", 00:21:17.678 "req_id": 1 00:21:17.678 } 00:21:17.678 Got JSON-RPC error response 00:21:17.678 response: 00:21:17.678 { 00:21:17.678 "code": -5, 00:21:17.678 "message": "Input/output error" 00:21:17.678 } 00:21:17.678 09:35:42 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:21:17.678 09:35:42 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.678 09:35:42 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@33 -- # sn=690092824 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 690092824 00:21:17.679 1 links removed 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@33 -- # sn=494153858 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 494153858 00:21:17.679 1 links removed 00:21:17.679 09:35:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84988 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84988 ']' 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84988 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84988 00:21:17.679 killing process with pid 84988 00:21:17.679 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.679 00:21:17.679 Latency(us) 00:21:17.679 [2024-10-16T09:35:42.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.679 [2024-10-16T09:35:42.083Z] =================================================================================================================== 00:21:17.679 [2024-10-16T09:35:42.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.679 09:35:42 keyring_linux -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84988' 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@969 -- # kill 84988 00:21:17.679 09:35:42 keyring_linux -- common/autotest_common.sh@974 -- # wait 84988 00:21:17.938 09:35:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84977 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84977 ']' 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84977 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84977 00:21:17.938 killing process with pid 84977 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84977' 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@969 -- # kill 84977 00:21:17.938 09:35:42 keyring_linux -- common/autotest_common.sh@974 -- # wait 84977 00:21:18.505 00:21:18.505 real 0m5.159s 00:21:18.505 user 0m10.111s 00:21:18.505 sys 0m1.490s 00:21:18.505 09:35:42 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:18.505 09:35:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:18.505 ************************************ 00:21:18.505 END TEST keyring_linux 00:21:18.505 ************************************ 00:21:18.505 09:35:42 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:18.505 09:35:42 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:18.505 09:35:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:18.505 09:35:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:18.505 09:35:42 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:18.505 09:35:42 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:21:18.505 09:35:42 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:18.505 09:35:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.505 09:35:42 -- common/autotest_common.sh@10 -- # set +x 00:21:18.505 09:35:42 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:18.505 09:35:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:18.505 09:35:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:18.505 09:35:42 -- common/autotest_common.sh@10 -- # set +x 00:21:20.419 INFO: APP EXITING 00:21:20.419 INFO: killing all VMs 
00:21:20.419 INFO: killing vhost app 00:21:20.419 INFO: EXIT DONE 00:21:20.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:20.955 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:20.955 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:21.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:21.522 Cleaning 00:21:21.522 Removing: /var/run/dpdk/spdk0/config 00:21:21.522 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:21.523 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:21.523 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:21.523 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:21.523 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:21.523 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:21.523 Removing: /var/run/dpdk/spdk1/config 00:21:21.523 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:21.523 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:21.523 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:21.523 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:21.523 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:21.523 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:21.523 Removing: /var/run/dpdk/spdk2/config 00:21:21.523 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:21.523 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:21.523 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:21.523 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:21.523 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:21.523 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:21.782 Removing: /var/run/dpdk/spdk3/config 00:21:21.782 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:21.782 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:21.782 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:21.782 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:21.782 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:21.782 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:21.782 Removing: /var/run/dpdk/spdk4/config 00:21:21.782 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:21.782 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:21.782 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:21.782 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:21.782 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:21.782 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:21.782 Removing: /dev/shm/nvmf_trace.0 00:21:21.782 Removing: /dev/shm/spdk_tgt_trace.pid56857 00:21:21.782 Removing: /var/run/dpdk/spdk0 00:21:21.782 Removing: /var/run/dpdk/spdk1 00:21:21.782 Removing: /var/run/dpdk/spdk2 00:21:21.782 Removing: /var/run/dpdk/spdk3 00:21:21.782 Removing: /var/run/dpdk/spdk4 00:21:21.782 Removing: /var/run/dpdk/spdk_pid56704 00:21:21.782 Removing: /var/run/dpdk/spdk_pid56857 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57056 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57142 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57162 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57272 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57282 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57422 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57623 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57777 00:21:21.782 Removing: /var/run/dpdk/spdk_pid57850 00:21:21.782 
Removing: /var/run/dpdk/spdk_pid57939 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58025 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58110 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58143 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58178 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58248 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58340 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58784 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58823 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58872 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58875 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58952 00:21:21.782 Removing: /var/run/dpdk/spdk_pid58960 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59032 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59041 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59087 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59105 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59150 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59161 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59297 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59327 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59410 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59741 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59759 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59790 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59803 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59819 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59838 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59857 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59867 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59891 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59905 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59926 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59945 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59953 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59974 00:21:21.782 Removing: /var/run/dpdk/spdk_pid59993 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60001 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60022 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60041 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60060 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60070 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60106 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60125 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60149 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60223 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60257 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60261 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60295 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60306 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60313 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60361 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60369 00:21:21.782 Removing: /var/run/dpdk/spdk_pid60403 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60407 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60422 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60426 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60437 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60445 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60457 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60466 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60495 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60521 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60535 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60559 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60573 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60576 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60621 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60628 00:21:22.041 Removing: 
/var/run/dpdk/spdk_pid60660 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60668 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60675 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60683 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60690 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60698 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60705 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60713 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60795 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60848 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60955 00:21:22.041 Removing: /var/run/dpdk/spdk_pid60994 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61039 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61058 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61070 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61090 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61127 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61137 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61217 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61243 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61277 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61347 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61409 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61438 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61537 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61580 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61612 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61844 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61942 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61969 00:21:22.041 Removing: /var/run/dpdk/spdk_pid61994 00:21:22.041 Removing: /var/run/dpdk/spdk_pid62028 00:21:22.041 Removing: /var/run/dpdk/spdk_pid62067 00:21:22.041 Removing: /var/run/dpdk/spdk_pid62099 00:21:22.041 Removing: /var/run/dpdk/spdk_pid62132 00:21:22.041 Removing: /var/run/dpdk/spdk_pid62515 00:21:22.041 Removing: /var/run/dpdk/spdk_pid62555 00:21:22.041 Removing: /var/run/dpdk/spdk_pid62892 00:21:22.041 Removing: /var/run/dpdk/spdk_pid63350 00:21:22.041 Removing: /var/run/dpdk/spdk_pid63614 00:21:22.041 Removing: /var/run/dpdk/spdk_pid64463 00:21:22.041 Removing: /var/run/dpdk/spdk_pid65362 00:21:22.041 Removing: /var/run/dpdk/spdk_pid65479 00:21:22.041 Removing: /var/run/dpdk/spdk_pid65552 00:21:22.041 Removing: /var/run/dpdk/spdk_pid66954 00:21:22.041 Removing: /var/run/dpdk/spdk_pid67268 00:21:22.041 Removing: /var/run/dpdk/spdk_pid70861 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71216 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71329 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71469 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71490 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71511 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71532 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71610 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71734 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71883 00:21:22.041 Removing: /var/run/dpdk/spdk_pid71959 00:21:22.041 Removing: /var/run/dpdk/spdk_pid72146 00:21:22.041 Removing: /var/run/dpdk/spdk_pid72208 00:21:22.041 Removing: /var/run/dpdk/spdk_pid72288 00:21:22.041 Removing: /var/run/dpdk/spdk_pid72639 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73059 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73060 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73061 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73320 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73640 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73643 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73966 00:21:22.041 Removing: /var/run/dpdk/spdk_pid73981 00:21:22.300 Removing: /var/run/dpdk/spdk_pid73995 
00:21:22.300 Removing: /var/run/dpdk/spdk_pid74026 00:21:22.300 Removing: /var/run/dpdk/spdk_pid74031 00:21:22.300 Removing: /var/run/dpdk/spdk_pid74381 00:21:22.300 Removing: /var/run/dpdk/spdk_pid74432 00:21:22.300 Removing: /var/run/dpdk/spdk_pid74751 00:21:22.300 Removing: /var/run/dpdk/spdk_pid74947 00:21:22.300 Removing: /var/run/dpdk/spdk_pid75362 00:21:22.300 Removing: /var/run/dpdk/spdk_pid75896 00:21:22.300 Removing: /var/run/dpdk/spdk_pid76764 00:21:22.300 Removing: /var/run/dpdk/spdk_pid77394 00:21:22.300 Removing: /var/run/dpdk/spdk_pid77396 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79397 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79451 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79498 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79552 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79647 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79700 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79751 00:21:22.300 Removing: /var/run/dpdk/spdk_pid79804 00:21:22.300 Removing: /var/run/dpdk/spdk_pid80168 00:21:22.300 Removing: /var/run/dpdk/spdk_pid81380 00:21:22.300 Removing: /var/run/dpdk/spdk_pid81519 00:21:22.300 Removing: /var/run/dpdk/spdk_pid81756 00:21:22.300 Removing: /var/run/dpdk/spdk_pid82344 00:21:22.300 Removing: /var/run/dpdk/spdk_pid82508 00:21:22.300 Removing: /var/run/dpdk/spdk_pid82666 00:21:22.300 Removing: /var/run/dpdk/spdk_pid82764 00:21:22.301 Removing: /var/run/dpdk/spdk_pid82922 00:21:22.301 Removing: /var/run/dpdk/spdk_pid83035 00:21:22.301 Removing: /var/run/dpdk/spdk_pid83742 00:21:22.301 Removing: /var/run/dpdk/spdk_pid83777 00:21:22.301 Removing: /var/run/dpdk/spdk_pid83812 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84062 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84097 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84131 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84601 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84612 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84850 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84977 00:21:22.301 Removing: /var/run/dpdk/spdk_pid84988 00:21:22.301 Clean 00:21:22.301 09:35:46 -- common/autotest_common.sh@1451 -- # return 0 00:21:22.301 09:35:46 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:22.301 09:35:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.301 09:35:46 -- common/autotest_common.sh@10 -- # set +x 00:21:22.301 09:35:46 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:22.301 09:35:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.301 09:35:46 -- common/autotest_common.sh@10 -- # set +x 00:21:22.559 09:35:46 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:22.559 09:35:46 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:22.559 09:35:46 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:22.559 09:35:46 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:22.559 09:35:46 -- spdk/autotest.sh@394 -- # hostname 00:21:22.559 09:35:46 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:22.559 geninfo: WARNING: invalid characters removed from testname! 
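The lcov invocations that follow evidently merge the baseline capture (cov_base.info) with the post-test capture (cov_test.info) and then strip non-SPDK code out of the final report. A condensed sketch of that post-processing, built only from the flags and paths visible in this log (the full commands also carry the genhtml/geninfo rc options shown above):

OUT=/home/vagrant/spdk_repo/spdk/../output
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
# Merge the baseline capture with the capture taken after the tests ran.
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# Remove coverage for code that is not SPDK's own: DPDK, system headers, and helper apps.
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"
# Scratch files are deleted once the merged report exists.
rm -f cov_base.info cov_test.info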
00:21:44.499 09:36:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:47.809 09:36:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:49.710 09:36:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:52.240 09:36:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:54.773 09:36:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:57.309 09:36:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.871 09:36:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:59.871 09:36:23 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:21:59.871 09:36:23 -- common/autotest_common.sh@1691 -- $ lcov --version 00:21:59.871 09:36:23 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:21:59.871 09:36:24 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:21:59.871 09:36:24 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:21:59.871 09:36:24 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:21:59.871 09:36:24 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:21:59.871 09:36:24 -- scripts/common.sh@336 -- $ IFS=.-: 00:21:59.871 09:36:24 -- scripts/common.sh@336 -- $ read -ra ver1 00:21:59.871 09:36:24 -- scripts/common.sh@337 -- $ IFS=.-: 00:21:59.871 09:36:24 -- scripts/common.sh@337 -- $ read -ra ver2 00:21:59.871 09:36:24 -- scripts/common.sh@338 -- $ local 'op=<' 00:21:59.871 09:36:24 -- scripts/common.sh@340 -- $ ver1_l=2 00:21:59.871 09:36:24 -- scripts/common.sh@341 -- $ ver2_l=1 00:21:59.871 09:36:24 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:21:59.871 09:36:24 -- scripts/common.sh@344 -- $ case "$op" in 00:21:59.871 09:36:24 -- scripts/common.sh@345 -- $ : 1 00:21:59.871 09:36:24 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:21:59.871 09:36:24 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.871 09:36:24 -- scripts/common.sh@365 -- $ decimal 1 00:21:59.871 09:36:24 -- scripts/common.sh@353 -- $ local d=1 00:21:59.871 09:36:24 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:21:59.871 09:36:24 -- scripts/common.sh@355 -- $ echo 1 00:21:59.871 09:36:24 -- scripts/common.sh@365 -- $ ver1[v]=1 00:21:59.871 09:36:24 -- scripts/common.sh@366 -- $ decimal 2 00:21:59.871 09:36:24 -- scripts/common.sh@353 -- $ local d=2 00:21:59.871 09:36:24 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:21:59.871 09:36:24 -- scripts/common.sh@355 -- $ echo 2 00:21:59.871 09:36:24 -- scripts/common.sh@366 -- $ ver2[v]=2 00:21:59.871 09:36:24 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:21:59.871 09:36:24 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:21:59.871 09:36:24 -- scripts/common.sh@368 -- $ return 0 00:21:59.871 09:36:24 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.871 09:36:24 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:21:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.871 --rc genhtml_branch_coverage=1 00:21:59.871 --rc genhtml_function_coverage=1 00:21:59.871 --rc genhtml_legend=1 00:21:59.871 --rc geninfo_all_blocks=1 00:21:59.871 --rc geninfo_unexecuted_blocks=1 00:21:59.871 00:21:59.871 ' 00:21:59.871 09:36:24 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:21:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.871 --rc genhtml_branch_coverage=1 00:21:59.871 --rc genhtml_function_coverage=1 00:21:59.871 --rc genhtml_legend=1 00:21:59.871 --rc geninfo_all_blocks=1 00:21:59.871 --rc geninfo_unexecuted_blocks=1 00:21:59.871 00:21:59.871 ' 00:21:59.871 09:36:24 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:21:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.871 --rc genhtml_branch_coverage=1 00:21:59.871 --rc genhtml_function_coverage=1 00:21:59.871 --rc genhtml_legend=1 00:21:59.871 --rc geninfo_all_blocks=1 00:21:59.871 --rc geninfo_unexecuted_blocks=1 00:21:59.871 00:21:59.871 ' 00:21:59.871 09:36:24 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:21:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.871 --rc genhtml_branch_coverage=1 00:21:59.871 --rc genhtml_function_coverage=1 00:21:59.871 --rc genhtml_legend=1 00:21:59.871 --rc geninfo_all_blocks=1 00:21:59.871 --rc geninfo_unexecuted_blocks=1 00:21:59.871 00:21:59.871 ' 00:21:59.871 09:36:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.871 09:36:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:21:59.871 09:36:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:59.871 09:36:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.871 09:36:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.871 09:36:24 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.871 09:36:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.871 09:36:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.871 09:36:24 -- paths/export.sh@5 -- $ export PATH 00:21:59.871 09:36:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.871 09:36:24 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:59.871 09:36:24 -- common/autobuild_common.sh@486 -- $ date +%s 00:21:59.871 09:36:24 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729071384.XXXXXX 00:21:59.871 09:36:24 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729071384.8zx7r7 00:21:59.871 09:36:24 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:21:59.871 09:36:24 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:21:59.871 09:36:24 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:21:59.871 09:36:24 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:59.872 09:36:24 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:59.872 09:36:24 -- common/autobuild_common.sh@502 -- $ get_config_params 00:21:59.872 09:36:24 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:21:59.872 09:36:24 -- common/autotest_common.sh@10 -- $ set +x 00:21:59.872 09:36:24 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:21:59.872 09:36:24 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:21:59.872 09:36:24 -- pm/common@17 -- $ local monitor 00:21:59.872 09:36:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:59.872 09:36:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:59.872 
09:36:24 -- pm/common@25 -- $ sleep 1 00:21:59.872 09:36:24 -- pm/common@21 -- $ date +%s 00:21:59.872 09:36:24 -- pm/common@21 -- $ date +%s 00:21:59.872 09:36:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729071384 00:21:59.872 09:36:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729071384 00:21:59.872 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729071384_collect-cpu-load.pm.log 00:21:59.872 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729071384_collect-vmstat.pm.log 00:22:00.807 09:36:25 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:22:00.807 09:36:25 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:22:00.807 09:36:25 -- spdk/autopackage.sh@14 -- $ timing_finish 00:22:00.807 09:36:25 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:00.807 09:36:25 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:00.807 09:36:25 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:00.807 09:36:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:00.807 09:36:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:00.807 09:36:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:00.807 09:36:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:00.807 09:36:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:00.807 09:36:25 -- pm/common@44 -- $ pid=86724 00:22:00.807 09:36:25 -- pm/common@50 -- $ kill -TERM 86724 00:22:00.807 09:36:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:00.807 09:36:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:00.807 09:36:25 -- pm/common@44 -- $ pid=86725 00:22:00.807 09:36:25 -- pm/common@50 -- $ kill -TERM 86725 00:22:00.807 + [[ -n 5368 ]] 00:22:00.807 + sudo kill 5368 00:22:00.816 [Pipeline] } 00:22:00.833 [Pipeline] // timeout 00:22:00.838 [Pipeline] } 00:22:00.853 [Pipeline] // stage 00:22:00.858 [Pipeline] } 00:22:00.873 [Pipeline] // catchError 00:22:00.882 [Pipeline] stage 00:22:00.885 [Pipeline] { (Stop VM) 00:22:00.897 [Pipeline] sh 00:22:01.177 + vagrant halt 00:22:03.710 ==> default: Halting domain... 00:22:10.284 [Pipeline] sh 00:22:10.563 + vagrant destroy -f 00:22:13.851 ==> default: Removing domain... 
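Stripped of the pipeline framing, the per-run teardown around this point reduces to a short sequence; a sketch using only the commands visible in this log (the workspace path is the one this job uses):

vagrant halt           # power off the test VM ("Halting domain...")
vagrant destroy -f     # remove the VM without prompting ("Removing domain...")
# Hand the collected results back to the Jenkins workspace, where the
# compress_artifacts.sh and check_artifacts_size.sh steps below prepare and validate the archive.
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output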
00:22:13.863 [Pipeline] sh 00:22:14.143 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:14.152 [Pipeline] } 00:22:14.168 [Pipeline] // stage 00:22:14.173 [Pipeline] } 00:22:14.187 [Pipeline] // dir 00:22:14.193 [Pipeline] } 00:22:14.209 [Pipeline] // wrap 00:22:14.216 [Pipeline] } 00:22:14.229 [Pipeline] // catchError 00:22:14.239 [Pipeline] stage 00:22:14.242 [Pipeline] { (Epilogue) 00:22:14.256 [Pipeline] sh 00:22:14.539 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:19.864 [Pipeline] catchError 00:22:19.867 [Pipeline] { 00:22:19.881 [Pipeline] sh 00:22:20.163 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:20.163 Artifacts sizes are good 00:22:20.172 [Pipeline] } 00:22:20.188 [Pipeline] // catchError 00:22:20.200 [Pipeline] archiveArtifacts 00:22:20.208 Archiving artifacts 00:22:20.376 [Pipeline] cleanWs 00:22:20.389 [WS-CLEANUP] Deleting project workspace... 00:22:20.389 [WS-CLEANUP] Deferred wipeout is used... 00:22:20.395 [WS-CLEANUP] done 00:22:20.397 [Pipeline] } 00:22:20.413 [Pipeline] // stage 00:22:20.418 [Pipeline] } 00:22:20.432 [Pipeline] // node 00:22:20.438 [Pipeline] End of Pipeline 00:22:20.480 Finished: SUCCESS